00:00:00.000 Started by upstream project "autotest-per-patch" build number 127158
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.059 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/iscsi-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.060 The recommended git tool is: git
00:00:00.060 using credential 00000000-0000-0000-0000-000000000002
00:00:00.062 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/iscsi-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.088 Fetching changes from the remote Git repository
00:00:00.090 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.186 Using shallow fetch with depth 1
00:00:00.186 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.186 > git --version # timeout=10
00:00:00.239 > git --version # 'git version 2.39.2'
00:00:00.241 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.308 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.308 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.471 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.485 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.498 Checking out Revision bd3e126a67c072de18fcd072f7502b1f7801d6ff (FETCH_HEAD)
00:00:06.498 > git config core.sparsecheckout # timeout=10
00:00:06.510 > git read-tree -mu HEAD # timeout=10
00:00:06.562 > git checkout -f bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=5
00:00:06.584 Commit message: "jenkins/autotest: add raid-vg subjob to autotest configs"
00:00:06.584 > git rev-list --no-walk bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=10
00:00:06.673 [Pipeline] Start of Pipeline
00:00:06.687 [Pipeline] library
00:00:06.688 Loading library shm_lib@master
00:00:06.688 Library shm_lib@master is cached. Copying from home.
00:00:06.700 [Pipeline] node
00:00:06.707 Running on VM-host-SM4 in /var/jenkins/workspace/iscsi-vg-autotest
00:00:06.708 [Pipeline] {
00:00:06.718 [Pipeline] catchError
00:00:06.720 [Pipeline] {
00:00:06.733 [Pipeline] wrap
00:00:06.742 [Pipeline] {
00:00:06.747 [Pipeline] stage
00:00:06.748 [Pipeline] { (Prologue)
00:00:06.764 [Pipeline] echo
00:00:06.766 Node: VM-host-SM4
00:00:06.773 [Pipeline] cleanWs
00:00:06.781 [WS-CLEANUP] Deleting project workspace...
00:00:06.781 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.785 [WS-CLEANUP] done
00:00:06.937 [Pipeline] setCustomBuildProperty
00:00:06.996 [Pipeline] httpRequest
00:00:07.018 [Pipeline] echo
00:00:07.019 Sorcerer 10.211.164.101 is alive
00:00:07.024 [Pipeline] httpRequest
00:00:07.027 HttpMethod: GET
00:00:07.028 URL: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:07.028 Sending request to url: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:07.045 Response Code: HTTP/1.1 200 OK
00:00:07.046 Success: Status code 200 is in the accepted range: 200,404
00:00:07.046 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:08.981 [Pipeline] sh
00:00:09.260 + tar --no-same-owner -xf jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:09.275 [Pipeline] httpRequest
00:00:09.298 [Pipeline] echo
00:00:09.300 Sorcerer 10.211.164.101 is alive
00:00:09.307 [Pipeline] httpRequest
00:00:09.312 HttpMethod: GET
00:00:09.313 URL: http://10.211.164.101/packages/spdk_c5d7cded45065dcdca09edf823accb166a29553d.tar.gz
00:00:09.314 Sending request to url: http://10.211.164.101/packages/spdk_c5d7cded45065dcdca09edf823accb166a29553d.tar.gz
00:00:09.323 Response Code: HTTP/1.1 200 OK
00:00:09.324 Success: Status code 200 is in the accepted range: 200,404
00:00:09.324 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest/spdk_c5d7cded45065dcdca09edf823accb166a29553d.tar.gz
00:01:01.185 [Pipeline] sh
00:01:01.458 + tar --no-same-owner -xf spdk_c5d7cded45065dcdca09edf823accb166a29553d.tar.gz
00:01:04.753 [Pipeline] sh
00:01:05.033 + git -C spdk log --oneline -n5
00:01:05.033 c5d7cded4 bdev/compress: print error code information in load compress bdev
00:01:05.033 58883cba9 bdev/compress: release reduce vol resource when comp bdev fails to be created.
00:01:05.033 b8378f94e scripts/pkgdep: Set yum's skip_if_unavailable=True under rocky8
00:01:05.033 c2a77f51e module/bdev/nvme: add detach-monitor poller
00:01:05.033 e14876e17 lib/nvme: add spdk_nvme_scan_attached()
00:01:05.053 [Pipeline] writeFile
00:01:05.069 [Pipeline] sh
00:01:05.347 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:05.358 [Pipeline] sh
00:01:05.634 + cat autorun-spdk.conf
00:01:05.634 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:05.634 SPDK_TEST_ISCSI_INITIATOR=1
00:01:05.634 SPDK_TEST_ISCSI=1
00:01:05.634 SPDK_TEST_RBD=1
00:01:05.634 SPDK_RUN_UBSAN=1
00:01:05.634 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:05.639 RUN_NIGHTLY=0
00:01:05.642 [Pipeline] }
00:01:05.658 [Pipeline] // stage
00:01:05.672 [Pipeline] stage
00:01:05.674 [Pipeline] { (Run VM)
00:01:05.684 [Pipeline] sh
00:01:05.955 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:05.955 + echo 'Start stage prepare_nvme.sh'
00:01:05.955 Start stage prepare_nvme.sh
00:01:05.955 + [[ -n 3 ]]
00:01:05.955 + disk_prefix=ex3
00:01:05.955 + [[ -n /var/jenkins/workspace/iscsi-vg-autotest ]]
00:01:05.955 + [[ -e /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf ]]
00:01:05.955 + source /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf
00:01:05.955 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:05.955 ++ SPDK_TEST_ISCSI_INITIATOR=1
00:01:05.955 ++ SPDK_TEST_ISCSI=1
00:01:05.955 ++ SPDK_TEST_RBD=1
00:01:05.955 ++ SPDK_RUN_UBSAN=1
00:01:05.955 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:05.955 ++ RUN_NIGHTLY=0
00:01:05.955 + cd /var/jenkins/workspace/iscsi-vg-autotest
00:01:05.955 + nvme_files=()
00:01:05.955 + declare -A nvme_files
00:01:05.955 + backend_dir=/var/lib/libvirt/images/backends
00:01:05.955 + nvme_files['nvme.img']=5G
00:01:05.955 + nvme_files['nvme-cmb.img']=5G
00:01:05.955 + nvme_files['nvme-multi0.img']=4G
00:01:05.955 + nvme_files['nvme-multi1.img']=4G
00:01:05.955 + nvme_files['nvme-multi2.img']=4G
00:01:05.955 + nvme_files['nvme-openstack.img']=8G
00:01:05.955 + nvme_files['nvme-zns.img']=5G
00:01:05.955 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:05.955 + (( SPDK_TEST_FTL == 1 ))
00:01:05.955 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:05.955 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:05.955 + for nvme in "${!nvme_files[@]}"
00:01:05.955 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:01:05.955 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:05.955 + for nvme in "${!nvme_files[@]}"
00:01:05.955 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:01:05.955 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:05.955 + for nvme in "${!nvme_files[@]}"
00:01:05.955 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:01:06.212 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:06.212 + for nvme in "${!nvme_files[@]}"
00:01:06.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:01:06.212 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:06.212 + for nvme in "${!nvme_files[@]}"
00:01:06.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:01:06.469 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:06.469 + for nvme in "${!nvme_files[@]}"
00:01:06.469 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:01:06.469 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:06.469 + for nvme in "${!nvme_files[@]}"
00:01:06.469 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:01:06.469 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:06.469 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:01:06.726 + echo 'End stage prepare_nvme.sh'
00:01:06.726 End stage prepare_nvme.sh
00:01:06.745 [Pipeline] sh
00:01:07.087 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:07.087 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38
00:01:07.087
00:01:07.087 DIR=/var/jenkins/workspace/iscsi-vg-autotest/spdk/scripts/vagrant
00:01:07.087 SPDK_DIR=/var/jenkins/workspace/iscsi-vg-autotest/spdk
00:01:07.087 VAGRANT_TARGET=/var/jenkins/workspace/iscsi-vg-autotest
00:01:07.087 HELP=0
00:01:07.087 DRY_RUN=0
00:01:07.087 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,
00:01:07.087 NVME_DISKS_TYPE=nvme,nvme,
00:01:07.087 NVME_AUTO_CREATE=0
00:01:07.087 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,
00:01:07.087 NVME_CMB=,,
00:01:07.087 NVME_PMR=,,
00:01:07.087 NVME_ZNS=,,
00:01:07.087 NVME_MS=,,
00:01:07.087 NVME_FDP=,,
00:01:07.087 SPDK_VAGRANT_DISTRO=fedora38
00:01:07.087 SPDK_VAGRANT_VMCPU=10
00:01:07.087 SPDK_VAGRANT_VMRAM=12288
00:01:07.087 SPDK_VAGRANT_PROVIDER=libvirt
00:01:07.087 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:07.087 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:07.087 SPDK_OPENSTACK_NETWORK=0
00:01:07.087 VAGRANT_PACKAGE_BOX=0
00:01:07.087 VAGRANTFILE=/var/jenkins/workspace/iscsi-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:07.087 FORCE_DISTRO=true
00:01:07.087 VAGRANT_BOX_VERSION=
00:01:07.087 EXTRA_VAGRANTFILES=
00:01:07.087 NIC_MODEL=e1000
00:01:07.087
00:01:07.088 mkdir: created directory '/var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt'
00:01:07.088 /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt /var/jenkins/workspace/iscsi-vg-autotest
00:01:10.368 Bringing machine 'default' up with 'libvirt' provider...
00:01:10.626 ==> default: Creating image (snapshot of base box volume).
00:01:10.885 ==> default: Creating domain with the following settings...
00:01:10.885 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721901703_461ef35fa8579e8f7ee8
00:01:10.885 ==> default: -- Domain type: kvm
00:01:10.885 ==> default: -- Cpus: 10
00:01:10.885 ==> default: -- Feature: acpi
00:01:10.885 ==> default: -- Feature: apic
00:01:10.885 ==> default: -- Feature: pae
00:01:10.885 ==> default: -- Memory: 12288M
00:01:10.885 ==> default: -- Memory Backing: hugepages:
00:01:10.886 ==> default: -- Management MAC:
00:01:10.886 ==> default: -- Loader:
00:01:10.886 ==> default: -- Nvram:
00:01:10.886 ==> default: -- Base box: spdk/fedora38
00:01:10.886 ==> default: -- Storage pool: default
00:01:10.886 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721901703_461ef35fa8579e8f7ee8.img (20G)
00:01:10.886 ==> default: -- Volume Cache: default
00:01:10.886 ==> default: -- Kernel:
00:01:10.886 ==> default: -- Initrd:
00:01:10.886 ==> default: -- Graphics Type: vnc
00:01:10.886 ==> default: -- Graphics Port: -1
00:01:10.886 ==> default: -- Graphics IP: 127.0.0.1
00:01:10.886 ==> default: -- Graphics Password: Not defined
00:01:10.886 ==> default: -- Video Type: cirrus
00:01:10.886 ==> default: -- Video VRAM: 9216
00:01:10.886 ==> default: -- Sound Type:
00:01:10.886 ==> default: -- Keymap: en-us
00:01:10.886 ==> default: -- TPM Path:
00:01:10.886 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:10.886 ==> default: -- Command line args:
00:01:10.886 ==> default: -> value=-device,
00:01:10.886 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:10.886 ==> default: -> value=-drive,
00:01:10.886 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0,
00:01:10.886 ==> default: -> value=-device,
00:01:10.886 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:10.886 ==> default: -> value=-device,
00:01:10.886 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:10.886 ==> default: -> value=-drive,
00:01:10.886 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:10.886 ==> default: -> value=-device,
00:01:10.886 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:10.886 ==> default: -> value=-drive,
00:01:10.886 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:10.886 ==> default: -> value=-device,
00:01:10.886 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:10.886 ==> default: -> value=-drive,
00:01:10.886 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:10.886 ==> default: -> value=-device,
00:01:10.886 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:10.886 ==> default: Creating shared folders metadata...
00:01:10.886 ==> default: Starting domain.
00:01:13.444 ==> default: Waiting for domain to get an IP address...
00:01:35.368 ==> default: Waiting for SSH to become available...
00:01:35.368 ==> default: Configuring and enabling network interfaces...
00:01:38.650 default: SSH address: 192.168.121.82:22
00:01:38.650 default: SSH username: vagrant
00:01:38.650 default: SSH auth method: private key
00:01:40.547 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:48.693 ==> default: Mounting SSHFS shared folder...
00:01:50.594 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:01:50.594 ==> default: Checking Mount..
00:01:51.968 ==> default: Folder Successfully Mounted!
00:01:51.968 ==> default: Running provisioner: file...
00:01:52.977 default: ~/.gitconfig => .gitconfig
00:01:53.233
00:01:53.233 SUCCESS!
00:01:53.233
00:01:53.233 cd to /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:01:53.233 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:53.233 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:01:53.233
00:01:53.242 [Pipeline] }
00:01:53.256 [Pipeline] // stage
00:01:53.265 [Pipeline] dir
00:01:53.265 Running in /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt
00:01:53.267 [Pipeline] {
00:01:53.281 [Pipeline] catchError
00:01:53.282 [Pipeline] {
00:01:53.295 [Pipeline] sh
00:01:53.573 + vagrant ssh-config --host vagrant
00:01:53.573 + sed -ne /^Host/,$p
00:01:53.573 + tee ssh_conf
00:01:57.758 Host vagrant
00:01:57.758 HostName 192.168.121.82
00:01:57.758 User vagrant
00:01:57.758 Port 22
00:01:57.758 UserKnownHostsFile /dev/null
00:01:57.758 StrictHostKeyChecking no
00:01:57.758 PasswordAuthentication no
00:01:57.758 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:01:57.758 IdentitiesOnly yes
00:01:57.758 LogLevel FATAL
00:01:57.758 ForwardAgent yes
00:01:57.758 ForwardX11 yes
00:01:57.758
00:01:57.770 [Pipeline] withEnv
00:01:57.772 [Pipeline] {
00:01:57.788 [Pipeline] sh
00:01:58.065 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:58.065 source /etc/os-release
00:01:58.065 [[ -e /image.version ]] && img=$(< /image.version)
00:01:58.065 # Minimal, systemd-like check.
00:01:58.065 if [[ -e /.dockerenv ]]; then
00:01:58.065 # Clear garbage from the node's name:
00:01:58.065 # agt-er_autotest_547-896 -> autotest_547-896
00:01:58.065 # $HOSTNAME is the actual container id
00:01:58.065 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:58.065 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:58.065 # We can assume this is a mount from a host where container is running,
00:01:58.065 # so fetch its hostname to easily identify the target swarm worker.
00:01:58.065 container="$(< /etc/hostname) ($agent)"
00:01:58.065 else
00:01:58.065 # Fallback
00:01:58.065 container=$agent
00:01:58.065 fi
00:01:58.065 fi
00:01:58.065 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:58.065
00:01:58.335 [Pipeline] }
00:01:58.354 [Pipeline] // withEnv
00:01:58.362 [Pipeline] setCustomBuildProperty
00:01:58.375 [Pipeline] stage
00:01:58.377 [Pipeline] { (Tests)
00:01:58.393 [Pipeline] sh
00:01:58.673 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:58.942 [Pipeline] sh
00:01:59.221 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:59.493 [Pipeline] timeout
00:01:59.494 Timeout set to expire in 45 min
00:01:59.495 [Pipeline] {
00:01:59.511 [Pipeline] sh
00:01:59.847 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:00.412 HEAD is now at c5d7cded4 bdev/compress: print error code information in load compress bdev
00:02:00.424 [Pipeline] sh
00:02:00.701 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:00.971 [Pipeline] sh
00:02:01.247 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:01.519 [Pipeline] sh
00:02:01.846 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=iscsi-vg-autotest ./autoruner.sh spdk_repo
00:02:01.846 ++ readlink -f spdk_repo
00:02:01.846 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:01.846 + [[ -n /home/vagrant/spdk_repo ]]
00:02:01.846 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:01.846 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:01.846 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:01.846 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:01.846 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:01.846 + [[ iscsi-vg-autotest == pkgdep-* ]]
00:02:01.846 + cd /home/vagrant/spdk_repo
00:02:01.846 + source /etc/os-release
00:02:01.846 ++ NAME='Fedora Linux'
00:02:01.846 ++ VERSION='38 (Cloud Edition)'
00:02:01.846 ++ ID=fedora
00:02:01.846 ++ VERSION_ID=38
00:02:01.846 ++ VERSION_CODENAME=
00:02:01.846 ++ PLATFORM_ID=platform:f38
00:02:01.846 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:01.846 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:01.846 ++ LOGO=fedora-logo-icon
00:02:01.846 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:01.846 ++ HOME_URL=https://fedoraproject.org/
00:02:01.846 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:01.846 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:01.846 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:01.846 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:01.846 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:01.846 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:01.846 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:01.846 ++ SUPPORT_END=2024-05-14
00:02:01.846 ++ VARIANT='Cloud Edition'
00:02:01.846 ++ VARIANT_ID=cloud
00:02:01.846 + uname -a
00:02:01.846 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:01.846 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:02.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:02.413 Hugepages
00:02:02.413 node hugesize free / total
00:02:02.413 node0 1048576kB 0 / 0
00:02:02.413 node0 2048kB 0 / 0
00:02:02.413
00:02:02.413 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:02.413 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:02.413 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:02.413 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:02.671 + rm -f /tmp/spdk-ld-path
00:02:02.671 + source autorun-spdk.conf
00:02:02.671 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:02.671 ++ SPDK_TEST_ISCSI_INITIATOR=1
00:02:02.671 ++ SPDK_TEST_ISCSI=1
00:02:02.671 ++ SPDK_TEST_RBD=1
00:02:02.671 ++ SPDK_RUN_UBSAN=1
00:02:02.671 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:02.671 ++ RUN_NIGHTLY=0
00:02:02.671 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:02.671 + [[ -n '' ]]
00:02:02.671 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:02.671 + for M in /var/spdk/build-*-manifest.txt
00:02:02.671 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:02.671 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:02.671 + for M in /var/spdk/build-*-manifest.txt
00:02:02.671 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:02.671 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:02.671 ++ uname
00:02:02.671 + [[ Linux == \L\i\n\u\x ]]
00:02:02.671 + sudo dmesg -T
00:02:02.671 + sudo dmesg --clear
00:02:02.671 + dmesg_pid=5172
00:02:02.671 + [[ Fedora Linux == FreeBSD ]]
00:02:02.671 + sudo dmesg -Tw
00:02:02.671 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:02.671 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:02.671 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:02.671 + [[ -x /usr/src/fio-static/fio ]]
00:02:02.671 + export FIO_BIN=/usr/src/fio-static/fio
00:02:02.671 + FIO_BIN=/usr/src/fio-static/fio
00:02:02.671 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:02.671 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:02.671 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:02.671 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:02.671 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:02.671 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:02.671 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:02.671 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:02.671 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:02.671 Test configuration:
00:02:02.671 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:02.671 SPDK_TEST_ISCSI_INITIATOR=1
00:02:02.671 SPDK_TEST_ISCSI=1
00:02:02.671 SPDK_TEST_RBD=1
00:02:02.671 SPDK_RUN_UBSAN=1
00:02:02.671 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:02.671 RUN_NIGHTLY=0
10:02:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
10:02:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
10:02:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
10:02:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
10:02:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:02:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:02:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:02:35 -- paths/export.sh@5 -- $ export PATH
10:02:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:02:35 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
10:02:35 -- common/autobuild_common.sh@447 -- $ date +%s
10:02:35 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721901755.XXXXXX
10:02:35 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721901755.1b2vo2
10:02:35 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
10:02:35 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
10:02:35 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
10:02:35 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
10:02:35 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
10:02:35 -- common/autobuild_common.sh@463 -- $ get_config_params
10:02:35 -- common/autotest_common.sh@396 -- $ xtrace_disable
10:02:35 -- common/autotest_common.sh@10 -- $ set +x
00:02:02.929
10:02:35 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-coverage --with-ublk'
10:02:35 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
10:02:35 -- pm/common@17 -- $ local monitor
10:02:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:02:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:02:35 -- pm/common@21 -- $ date +%s
10:02:35 -- pm/common@25 -- $ sleep 1
10:02:35 -- pm/common@21 -- $ date +%s
10:02:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721901755
10:02:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721901755
00:02:02.930 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721901755_collect-vmstat.pm.log
00:02:02.930 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721901755_collect-cpu-load.pm.log
00:02:03.863
10:02:36 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
10:02:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
10:02:36 -- spdk/autobuild.sh@12 -- $ umask 022
10:02:36 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
10:02:36 -- spdk/autobuild.sh@16 -- $ date -u
00:02:03.863 Thu Jul 25 10:02:36 AM UTC 2024
10:02:36 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:03.863 v24.09-pre-304-gc5d7cded4
10:02:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
10:02:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
10:02:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
10:02:36 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
10:02:36 -- common/autotest_common.sh@1105 -- $ xtrace_disable
10:02:36 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.863 ************************************
00:02:03.863 START TEST ubsan
00:02:03.863 ************************************
00:02:03.863 using ubsan
10:02:36 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:02:03.863
00:02:03.863 real 0m0.000s
00:02:03.863 user 0m0.000s
00:02:03.863 sys 0m0.000s
10:02:36 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:03.863 ************************************
00:02:03.863 END TEST ubsan
00:02:03.863 ************************************
10:02:36 ubsan -- common/autotest_common.sh@10 -- $ set +x
10:02:37 -- common/autotest_common.sh@1142 -- $ return 0
10:02:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
10:02:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
10:02:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
10:02:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
10:02:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
10:02:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
10:02:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
10:02:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
10:02:37 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-coverage --with-ublk --with-shared
00:02:04.121 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:04.121 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:04.378 Using 'verbs' RDMA provider
00:02:20.679 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:32.923 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:32.923 Creating mk/config.mk...done.
00:02:32.923 Creating mk/cc.flags.mk...done.
00:02:32.923 Type 'make' to build.
10:03:05 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
10:03:05 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
10:03:05 -- common/autotest_common.sh@1105 -- $ xtrace_disable
10:03:05 -- common/autotest_common.sh@10 -- $ set +x
00:02:32.923 ************************************
00:02:32.923 START TEST make
00:02:32.923 ************************************
10:03:05 make -- common/autotest_common.sh@1123 -- $ make -j10
00:02:32.923 make[1]: Nothing to be done for 'all'.
00:02:47.796 The Meson build system 00:02:47.796 Version: 1.3.1 00:02:47.796 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:47.796 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:47.796 Build type: native build 00:02:47.796 Program cat found: YES (/usr/bin/cat) 00:02:47.796 Project name: DPDK 00:02:47.796 Project version: 24.03.0 00:02:47.796 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:47.796 C linker for the host machine: cc ld.bfd 2.39-16 00:02:47.796 Host machine cpu family: x86_64 00:02:47.796 Host machine cpu: x86_64 00:02:47.796 Message: ## Building in Developer Mode ## 00:02:47.796 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:47.796 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:47.796 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:47.796 Program python3 found: YES (/usr/bin/python3) 00:02:47.796 Program cat found: YES (/usr/bin/cat) 00:02:47.796 Compiler for C supports arguments -march=native: YES 00:02:47.796 Checking for size of "void *" : 8 00:02:47.796 Checking for size of "void *" : 8 (cached) 00:02:47.796 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:47.796 Library m found: YES 00:02:47.796 Library numa found: YES 00:02:47.796 Has header "numaif.h" : YES 00:02:47.796 Library fdt found: NO 00:02:47.796 Library execinfo found: NO 00:02:47.796 Has header "execinfo.h" : YES 00:02:47.796 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:47.796 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:47.796 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:47.796 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:47.796 Run-time dependency openssl found: YES 3.0.9 00:02:47.796 Run-time dependency libpcap found: YES 1.10.4 00:02:47.796 Has header "pcap.h" with dependency 
libpcap: YES 00:02:47.796 Compiler for C supports arguments -Wcast-qual: YES 00:02:47.796 Compiler for C supports arguments -Wdeprecated: YES 00:02:47.796 Compiler for C supports arguments -Wformat: YES 00:02:47.796 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:47.796 Compiler for C supports arguments -Wformat-security: NO 00:02:47.796 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:47.796 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:47.796 Compiler for C supports arguments -Wnested-externs: YES 00:02:47.796 Compiler for C supports arguments -Wold-style-definition: YES 00:02:47.796 Compiler for C supports arguments -Wpointer-arith: YES 00:02:47.796 Compiler for C supports arguments -Wsign-compare: YES 00:02:47.796 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:47.796 Compiler for C supports arguments -Wundef: YES 00:02:47.796 Compiler for C supports arguments -Wwrite-strings: YES 00:02:47.796 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:47.796 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:47.796 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:47.796 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:47.796 Program objdump found: YES (/usr/bin/objdump) 00:02:47.796 Compiler for C supports arguments -mavx512f: YES 00:02:47.796 Checking if "AVX512 checking" compiles: YES 00:02:47.796 Fetching value of define "__SSE4_2__" : 1 00:02:47.796 Fetching value of define "__AES__" : 1 00:02:47.796 Fetching value of define "__AVX__" : 1 00:02:47.796 Fetching value of define "__AVX2__" : 1 00:02:47.796 Fetching value of define "__AVX512BW__" : 1 00:02:47.796 Fetching value of define "__AVX512CD__" : 1 00:02:47.796 Fetching value of define "__AVX512DQ__" : 1 00:02:47.796 Fetching value of define "__AVX512F__" : 1 00:02:47.796 Fetching value of define "__AVX512VL__" : 1 00:02:47.796 Fetching value of define 
"__PCLMUL__" : 1 00:02:47.796 Fetching value of define "__RDRND__" : 1 00:02:47.796 Fetching value of define "__RDSEED__" : 1 00:02:47.796 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:47.796 Fetching value of define "__znver1__" : (undefined) 00:02:47.796 Fetching value of define "__znver2__" : (undefined) 00:02:47.796 Fetching value of define "__znver3__" : (undefined) 00:02:47.796 Fetching value of define "__znver4__" : (undefined) 00:02:47.796 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:47.796 Message: lib/log: Defining dependency "log" 00:02:47.796 Message: lib/kvargs: Defining dependency "kvargs" 00:02:47.796 Message: lib/telemetry: Defining dependency "telemetry" 00:02:47.796 Checking for function "getentropy" : NO 00:02:47.796 Message: lib/eal: Defining dependency "eal" 00:02:47.796 Message: lib/ring: Defining dependency "ring" 00:02:47.796 Message: lib/rcu: Defining dependency "rcu" 00:02:47.796 Message: lib/mempool: Defining dependency "mempool" 00:02:47.796 Message: lib/mbuf: Defining dependency "mbuf" 00:02:47.796 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:47.796 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:47.796 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:47.796 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:47.796 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:47.796 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:47.796 Compiler for C supports arguments -mpclmul: YES 00:02:47.796 Compiler for C supports arguments -maes: YES 00:02:47.796 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:47.796 Compiler for C supports arguments -mavx512bw: YES 00:02:47.796 Compiler for C supports arguments -mavx512dq: YES 00:02:47.796 Compiler for C supports arguments -mavx512vl: YES 00:02:47.796 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:47.796 Compiler for C supports arguments -mavx2: YES 00:02:47.796 Compiler 
for C supports arguments -mavx: YES 00:02:47.796 Message: lib/net: Defining dependency "net" 00:02:47.796 Message: lib/meter: Defining dependency "meter" 00:02:47.796 Message: lib/ethdev: Defining dependency "ethdev" 00:02:47.796 Message: lib/pci: Defining dependency "pci" 00:02:47.796 Message: lib/cmdline: Defining dependency "cmdline" 00:02:47.796 Message: lib/hash: Defining dependency "hash" 00:02:47.796 Message: lib/timer: Defining dependency "timer" 00:02:47.796 Message: lib/compressdev: Defining dependency "compressdev" 00:02:47.796 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:47.796 Message: lib/dmadev: Defining dependency "dmadev" 00:02:47.796 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:47.796 Message: lib/power: Defining dependency "power" 00:02:47.796 Message: lib/reorder: Defining dependency "reorder" 00:02:47.796 Message: lib/security: Defining dependency "security" 00:02:47.796 Has header "linux/userfaultfd.h" : YES 00:02:47.796 Has header "linux/vduse.h" : YES 00:02:47.796 Message: lib/vhost: Defining dependency "vhost" 00:02:47.796 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:47.796 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:47.796 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:47.796 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:47.796 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:47.796 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:47.796 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:47.796 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:47.796 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:47.796 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:47.796 Program doxygen found: YES (/usr/bin/doxygen) 00:02:47.796 Configuring 
doxy-api-html.conf using configuration 00:02:47.796 Configuring doxy-api-man.conf using configuration 00:02:47.796 Program mandb found: YES (/usr/bin/mandb) 00:02:47.796 Program sphinx-build found: NO 00:02:47.796 Configuring rte_build_config.h using configuration 00:02:47.796 Message: 00:02:47.796 ================= 00:02:47.796 Applications Enabled 00:02:47.796 ================= 00:02:47.796 00:02:47.796 apps: 00:02:47.796 00:02:47.796 00:02:47.796 Message: 00:02:47.796 ================= 00:02:47.796 Libraries Enabled 00:02:47.796 ================= 00:02:47.796 00:02:47.796 libs: 00:02:47.796 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:47.796 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:47.796 cryptodev, dmadev, power, reorder, security, vhost, 00:02:47.796 00:02:47.796 Message: 00:02:47.796 =============== 00:02:47.796 Drivers Enabled 00:02:47.796 =============== 00:02:47.796 00:02:47.796 common: 00:02:47.796 00:02:47.796 bus: 00:02:47.796 pci, vdev, 00:02:47.796 mempool: 00:02:47.796 ring, 00:02:47.796 dma: 00:02:47.796 00:02:47.796 net: 00:02:47.796 00:02:47.796 crypto: 00:02:47.796 00:02:47.796 compress: 00:02:47.796 00:02:47.796 vdpa: 00:02:47.796 00:02:47.796 00:02:47.796 Message: 00:02:47.796 ================= 00:02:47.796 Content Skipped 00:02:47.796 ================= 00:02:47.796 00:02:47.796 apps: 00:02:47.796 dumpcap: explicitly disabled via build config 00:02:47.796 graph: explicitly disabled via build config 00:02:47.796 pdump: explicitly disabled via build config 00:02:47.796 proc-info: explicitly disabled via build config 00:02:47.796 test-acl: explicitly disabled via build config 00:02:47.796 test-bbdev: explicitly disabled via build config 00:02:47.796 test-cmdline: explicitly disabled via build config 00:02:47.796 test-compress-perf: explicitly disabled via build config 00:02:47.796 test-crypto-perf: explicitly disabled via build config 00:02:47.796 test-dma-perf: explicitly disabled via build config 
00:02:47.797 test-eventdev: explicitly disabled via build config 00:02:47.797 test-fib: explicitly disabled via build config 00:02:47.797 test-flow-perf: explicitly disabled via build config 00:02:47.797 test-gpudev: explicitly disabled via build config 00:02:47.797 test-mldev: explicitly disabled via build config 00:02:47.797 test-pipeline: explicitly disabled via build config 00:02:47.797 test-pmd: explicitly disabled via build config 00:02:47.797 test-regex: explicitly disabled via build config 00:02:47.797 test-sad: explicitly disabled via build config 00:02:47.797 test-security-perf: explicitly disabled via build config 00:02:47.797 00:02:47.797 libs: 00:02:47.797 argparse: explicitly disabled via build config 00:02:47.797 metrics: explicitly disabled via build config 00:02:47.797 acl: explicitly disabled via build config 00:02:47.797 bbdev: explicitly disabled via build config 00:02:47.797 bitratestats: explicitly disabled via build config 00:02:47.797 bpf: explicitly disabled via build config 00:02:47.797 cfgfile: explicitly disabled via build config 00:02:47.797 distributor: explicitly disabled via build config 00:02:47.797 efd: explicitly disabled via build config 00:02:47.797 eventdev: explicitly disabled via build config 00:02:47.797 dispatcher: explicitly disabled via build config 00:02:47.797 gpudev: explicitly disabled via build config 00:02:47.797 gro: explicitly disabled via build config 00:02:47.797 gso: explicitly disabled via build config 00:02:47.797 ip_frag: explicitly disabled via build config 00:02:47.797 jobstats: explicitly disabled via build config 00:02:47.797 latencystats: explicitly disabled via build config 00:02:47.797 lpm: explicitly disabled via build config 00:02:47.797 member: explicitly disabled via build config 00:02:47.797 pcapng: explicitly disabled via build config 00:02:47.797 rawdev: explicitly disabled via build config 00:02:47.797 regexdev: explicitly disabled via build config 00:02:47.797 mldev: explicitly disabled via 
build config 00:02:47.797 rib: explicitly disabled via build config 00:02:47.797 sched: explicitly disabled via build config 00:02:47.797 stack: explicitly disabled via build config 00:02:47.797 ipsec: explicitly disabled via build config 00:02:47.797 pdcp: explicitly disabled via build config 00:02:47.797 fib: explicitly disabled via build config 00:02:47.797 port: explicitly disabled via build config 00:02:47.797 pdump: explicitly disabled via build config 00:02:47.797 table: explicitly disabled via build config 00:02:47.797 pipeline: explicitly disabled via build config 00:02:47.797 graph: explicitly disabled via build config 00:02:47.797 node: explicitly disabled via build config 00:02:47.797 00:02:47.797 drivers: 00:02:47.797 common/cpt: not in enabled drivers build config 00:02:47.797 common/dpaax: not in enabled drivers build config 00:02:47.797 common/iavf: not in enabled drivers build config 00:02:47.797 common/idpf: not in enabled drivers build config 00:02:47.797 common/ionic: not in enabled drivers build config 00:02:47.797 common/mvep: not in enabled drivers build config 00:02:47.797 common/octeontx: not in enabled drivers build config 00:02:47.797 bus/auxiliary: not in enabled drivers build config 00:02:47.797 bus/cdx: not in enabled drivers build config 00:02:47.797 bus/dpaa: not in enabled drivers build config 00:02:47.797 bus/fslmc: not in enabled drivers build config 00:02:47.797 bus/ifpga: not in enabled drivers build config 00:02:47.797 bus/platform: not in enabled drivers build config 00:02:47.797 bus/uacce: not in enabled drivers build config 00:02:47.797 bus/vmbus: not in enabled drivers build config 00:02:47.797 common/cnxk: not in enabled drivers build config 00:02:47.797 common/mlx5: not in enabled drivers build config 00:02:47.797 common/nfp: not in enabled drivers build config 00:02:47.797 common/nitrox: not in enabled drivers build config 00:02:47.797 common/qat: not in enabled drivers build config 00:02:47.797 common/sfc_efx: not in 
enabled drivers build config 00:02:47.797 mempool/bucket: not in enabled drivers build config 00:02:47.797 mempool/cnxk: not in enabled drivers build config 00:02:47.797 mempool/dpaa: not in enabled drivers build config 00:02:47.797 mempool/dpaa2: not in enabled drivers build config 00:02:47.797 mempool/octeontx: not in enabled drivers build config 00:02:47.797 mempool/stack: not in enabled drivers build config 00:02:47.797 dma/cnxk: not in enabled drivers build config 00:02:47.797 dma/dpaa: not in enabled drivers build config 00:02:47.797 dma/dpaa2: not in enabled drivers build config 00:02:47.797 dma/hisilicon: not in enabled drivers build config 00:02:47.797 dma/idxd: not in enabled drivers build config 00:02:47.797 dma/ioat: not in enabled drivers build config 00:02:47.797 dma/skeleton: not in enabled drivers build config 00:02:47.797 net/af_packet: not in enabled drivers build config 00:02:47.797 net/af_xdp: not in enabled drivers build config 00:02:47.797 net/ark: not in enabled drivers build config 00:02:47.797 net/atlantic: not in enabled drivers build config 00:02:47.797 net/avp: not in enabled drivers build config 00:02:47.797 net/axgbe: not in enabled drivers build config 00:02:47.797 net/bnx2x: not in enabled drivers build config 00:02:47.797 net/bnxt: not in enabled drivers build config 00:02:47.797 net/bonding: not in enabled drivers build config 00:02:47.797 net/cnxk: not in enabled drivers build config 00:02:47.797 net/cpfl: not in enabled drivers build config 00:02:47.797 net/cxgbe: not in enabled drivers build config 00:02:47.797 net/dpaa: not in enabled drivers build config 00:02:47.797 net/dpaa2: not in enabled drivers build config 00:02:47.797 net/e1000: not in enabled drivers build config 00:02:47.797 net/ena: not in enabled drivers build config 00:02:47.797 net/enetc: not in enabled drivers build config 00:02:47.797 net/enetfec: not in enabled drivers build config 00:02:47.797 net/enic: not in enabled drivers build config 00:02:47.797 
net/failsafe: not in enabled drivers build config 00:02:47.797 net/fm10k: not in enabled drivers build config 00:02:47.797 net/gve: not in enabled drivers build config 00:02:47.797 net/hinic: not in enabled drivers build config 00:02:47.797 net/hns3: not in enabled drivers build config 00:02:47.797 net/i40e: not in enabled drivers build config 00:02:47.797 net/iavf: not in enabled drivers build config 00:02:47.797 net/ice: not in enabled drivers build config 00:02:47.797 net/idpf: not in enabled drivers build config 00:02:47.797 net/igc: not in enabled drivers build config 00:02:47.797 net/ionic: not in enabled drivers build config 00:02:47.797 net/ipn3ke: not in enabled drivers build config 00:02:47.797 net/ixgbe: not in enabled drivers build config 00:02:47.797 net/mana: not in enabled drivers build config 00:02:47.797 net/memif: not in enabled drivers build config 00:02:47.797 net/mlx4: not in enabled drivers build config 00:02:47.797 net/mlx5: not in enabled drivers build config 00:02:47.797 net/mvneta: not in enabled drivers build config 00:02:47.797 net/mvpp2: not in enabled drivers build config 00:02:47.797 net/netvsc: not in enabled drivers build config 00:02:47.797 net/nfb: not in enabled drivers build config 00:02:47.797 net/nfp: not in enabled drivers build config 00:02:47.797 net/ngbe: not in enabled drivers build config 00:02:47.797 net/null: not in enabled drivers build config 00:02:47.797 net/octeontx: not in enabled drivers build config 00:02:47.797 net/octeon_ep: not in enabled drivers build config 00:02:47.797 net/pcap: not in enabled drivers build config 00:02:47.797 net/pfe: not in enabled drivers build config 00:02:47.797 net/qede: not in enabled drivers build config 00:02:47.797 net/ring: not in enabled drivers build config 00:02:47.797 net/sfc: not in enabled drivers build config 00:02:47.797 net/softnic: not in enabled drivers build config 00:02:47.797 net/tap: not in enabled drivers build config 00:02:47.797 net/thunderx: not in enabled 
drivers build config 00:02:47.797 net/txgbe: not in enabled drivers build config 00:02:47.797 net/vdev_netvsc: not in enabled drivers build config 00:02:47.797 net/vhost: not in enabled drivers build config 00:02:47.797 net/virtio: not in enabled drivers build config 00:02:47.797 net/vmxnet3: not in enabled drivers build config 00:02:47.797 raw/*: missing internal dependency, "rawdev" 00:02:47.797 crypto/armv8: not in enabled drivers build config 00:02:47.797 crypto/bcmfs: not in enabled drivers build config 00:02:47.797 crypto/caam_jr: not in enabled drivers build config 00:02:47.797 crypto/ccp: not in enabled drivers build config 00:02:47.797 crypto/cnxk: not in enabled drivers build config 00:02:47.797 crypto/dpaa_sec: not in enabled drivers build config 00:02:47.797 crypto/dpaa2_sec: not in enabled drivers build config 00:02:47.797 crypto/ipsec_mb: not in enabled drivers build config 00:02:47.797 crypto/mlx5: not in enabled drivers build config 00:02:47.797 crypto/mvsam: not in enabled drivers build config 00:02:47.797 crypto/nitrox: not in enabled drivers build config 00:02:47.797 crypto/null: not in enabled drivers build config 00:02:47.797 crypto/octeontx: not in enabled drivers build config 00:02:47.797 crypto/openssl: not in enabled drivers build config 00:02:47.797 crypto/scheduler: not in enabled drivers build config 00:02:47.797 crypto/uadk: not in enabled drivers build config 00:02:47.797 crypto/virtio: not in enabled drivers build config 00:02:47.797 compress/isal: not in enabled drivers build config 00:02:47.797 compress/mlx5: not in enabled drivers build config 00:02:47.798 compress/nitrox: not in enabled drivers build config 00:02:47.798 compress/octeontx: not in enabled drivers build config 00:02:47.798 compress/zlib: not in enabled drivers build config 00:02:47.798 regex/*: missing internal dependency, "regexdev" 00:02:47.798 ml/*: missing internal dependency, "mldev" 00:02:47.798 vdpa/ifc: not in enabled drivers build config 00:02:47.798 
vdpa/mlx5: not in enabled drivers build config 00:02:47.798 vdpa/nfp: not in enabled drivers build config 00:02:47.798 vdpa/sfc: not in enabled drivers build config 00:02:47.798 event/*: missing internal dependency, "eventdev" 00:02:47.798 baseband/*: missing internal dependency, "bbdev" 00:02:47.798 gpu/*: missing internal dependency, "gpudev" 00:02:47.798 00:02:47.798 00:02:47.798 Build targets in project: 85 00:02:47.798 00:02:47.798 DPDK 24.03.0 00:02:47.798 00:02:47.798 User defined options 00:02:47.798 buildtype : debug 00:02:47.798 default_library : shared 00:02:47.798 libdir : lib 00:02:47.798 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:47.798 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:47.798 c_link_args : 00:02:47.798 cpu_instruction_set: native 00:02:47.798 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:47.798 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:47.798 enable_docs : false 00:02:47.798 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:47.798 enable_kmods : false 00:02:47.798 max_lcores : 128 00:02:47.798 tests : false 00:02:47.798 00:02:47.798 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:47.798 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:47.798 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:47.798 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:47.798 [3/268] Linking static target lib/librte_log.a 00:02:47.798 [4/268] Compiling C object 
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:47.798 [5/268] Linking static target lib/librte_kvargs.a 00:02:47.798 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:47.798 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:47.798 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:47.798 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:47.798 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:47.798 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:47.798 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:47.798 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.055 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:48.055 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:48.055 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:48.055 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:48.055 [18/268] Linking static target lib/librte_telemetry.a 00:02:48.337 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.337 [20/268] Linking target lib/librte_log.so.24.1 00:02:48.337 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:48.595 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:48.595 [23/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:48.595 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:48.595 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:48.595 [26/268] Linking target 
lib/librte_kvargs.so.24.1 00:02:48.595 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:48.595 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:48.852 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:48.852 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:48.852 [31/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:48.852 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:49.110 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:49.110 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:49.110 [35/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.110 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:49.368 [37/268] Linking target lib/librte_telemetry.so.24.1 00:02:49.368 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:49.368 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:49.368 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:49.368 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:49.627 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:49.627 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:49.627 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:49.627 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:49.627 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:49.627 [47/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 
00:02:49.627 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:49.884 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:49.884 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:49.884 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:50.141 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:50.141 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:50.141 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:50.141 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:50.399 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:50.399 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:50.399 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:50.399 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:50.656 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:50.656 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:50.913 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:50.913 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:50.913 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:50.913 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:50.913 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:50.913 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:51.170 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:51.170 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:51.170 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:51.428 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:51.428 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:51.428 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:51.428 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:51.428 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:51.428 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:51.685 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:51.685 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:51.685 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:51.685 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:51.942 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:51.942 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:51.942 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:51.942 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:51.942 [85/268] Linking static target lib/librte_ring.a 00:02:52.199 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:52.199 [87/268] Linking static target lib/librte_eal.a 00:02:52.457 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:52.457 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:52.457 [90/268] Linking static target lib/librte_rcu.a 00:02:52.457 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:52.457 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:52.457 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:52.457 [94/268] Compiling C 
object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:52.457 [95/268] Linking static target lib/librte_mempool.a 00:02:52.714 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.714 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:52.971 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:52.971 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:52.971 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:52.971 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.262 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:53.262 [103/268] Linking static target lib/librte_mbuf.a 00:02:53.524 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:53.524 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:53.524 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:53.524 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:53.783 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:53.783 [109/268] Linking static target lib/librte_net.a 00:02:53.783 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:53.783 [111/268] Linking static target lib/librte_meter.a 00:02:54.040 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:54.040 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.298 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.298 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.298 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:54.556 [117/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:54.814 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:54.814 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.071 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:55.329 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:55.329 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:55.329 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:55.588 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:55.588 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:55.588 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:55.588 [127/268] Linking static target lib/librte_pci.a 00:02:55.588 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:55.845 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:55.845 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:55.845 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:56.167 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:56.167 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:56.167 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:56.167 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.167 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:56.167 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:56.167 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:56.167 [139/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:56.167 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:56.167 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:56.167 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:56.167 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:56.167 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:56.426 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:56.426 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:56.426 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:56.426 [148/268] Linking static target lib/librte_ethdev.a 00:02:56.684 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:56.684 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:56.684 [151/268] Linking static target lib/librte_cmdline.a 00:02:56.943 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:56.943 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:56.943 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:57.201 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:57.201 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:57.201 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:57.201 [158/268] Linking static target lib/librte_hash.a 00:02:57.201 [159/268] Linking static target lib/librte_timer.a 00:02:57.459 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:57.459 [161/268] Linking static target lib/librte_compressdev.a 00:02:57.459 [162/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:57.717 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:57.717 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:57.717 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:57.975 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:57.975 [167/268] Linking static target lib/librte_dmadev.a 00:02:57.975 [168/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.232 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:58.232 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:58.232 [171/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:58.232 [172/268] Linking static target lib/librte_cryptodev.a 00:02:58.232 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:58.232 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:58.490 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.748 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.748 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:58.748 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:58.748 [179/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.748 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:59.007 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.007 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:59.007 [183/268] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:02:59.007 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:59.268 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:59.268 [186/268] Linking static target lib/librte_reorder.a 00:02:59.525 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:59.525 [188/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:59.525 [189/268] Linking static target lib/librte_power.a 00:02:59.525 [190/268] Linking static target lib/librte_security.a 00:02:59.525 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:59.525 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:59.782 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:00.040 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:00.040 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.298 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:00.556 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:00.556 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.556 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:00.556 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:00.815 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:00.815 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:01.072 [203/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.072 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:01.072 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 
00:03:01.330 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.330 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:01.330 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:01.330 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:01.330 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:01.588 [211/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:01.588 [212/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:01.588 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:01.588 [214/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:01.588 [215/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:01.588 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.588 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.588 [218/268] Linking static target drivers/librte_bus_vdev.a 00:03:01.846 [219/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:01.846 [220/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.846 [221/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.846 [222/268] Linking static target drivers/librte_mempool_ring.a 00:03:01.846 [223/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:01.846 [224/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.846 [225/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.846 [226/268] Linking static target drivers/librte_bus_pci.a 
00:03:02.105 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.672 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.929 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:03.186 [230/268] Linking static target lib/librte_vhost.a 00:03:04.560 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.125 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.125 [233/268] Linking target lib/librte_eal.so.24.1 00:03:05.382 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:05.382 [235/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.382 [236/268] Linking target lib/librte_meter.so.24.1 00:03:05.382 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:05.382 [238/268] Linking target lib/librte_timer.so.24.1 00:03:05.382 [239/268] Linking target lib/librte_pci.so.24.1 00:03:05.382 [240/268] Linking target lib/librte_ring.so.24.1 00:03:05.382 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:05.382 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:05.382 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:05.382 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:05.382 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:05.640 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:05.640 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:05.640 [248/268] Linking target lib/librte_rcu.so.24.1 00:03:05.640 [249/268] Linking target lib/librte_mempool.so.24.1 00:03:05.640 [250/268] Generating 
symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:05.640 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:05.935 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:05.935 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:05.935 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:05.935 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:05.935 [256/268] Linking target lib/librte_net.so.24.1 00:03:05.935 [257/268] Linking target lib/librte_reorder.so.24.1 00:03:05.935 [258/268] Linking target lib/librte_compressdev.so.24.1 00:03:06.194 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:06.194 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:06.194 [261/268] Linking target lib/librte_security.so.24.1 00:03:06.194 [262/268] Linking target lib/librte_hash.so.24.1 00:03:06.194 [263/268] Linking target lib/librte_cmdline.so.24.1 00:03:06.194 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:06.453 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:06.453 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:06.453 [267/268] Linking target lib/librte_power.so.24.1 00:03:06.453 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:06.453 INFO: autodetecting backend as ninja 00:03:06.453 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:07.860 CC lib/log/log.o 00:03:07.860 CC lib/ut_mock/mock.o 00:03:07.860 CC lib/ut/ut.o 00:03:07.860 CC lib/log/log_deprecated.o 00:03:07.860 CC lib/log/log_flags.o 00:03:07.860 LIB libspdk_ut_mock.a 00:03:07.860 SO libspdk_ut_mock.so.6.0 00:03:07.860 LIB libspdk_log.a 00:03:07.860 LIB libspdk_ut.a 00:03:08.118 SO libspdk_log.so.7.0 00:03:08.118 SO 
libspdk_ut.so.2.0 00:03:08.118 SYMLINK libspdk_ut_mock.so 00:03:08.118 SYMLINK libspdk_ut.so 00:03:08.118 SYMLINK libspdk_log.so 00:03:08.376 CC lib/dma/dma.o 00:03:08.376 CXX lib/trace_parser/trace.o 00:03:08.376 CC lib/util/bit_array.o 00:03:08.376 CC lib/util/base64.o 00:03:08.376 CC lib/util/crc16.o 00:03:08.376 CC lib/util/crc32.o 00:03:08.376 CC lib/util/crc32c.o 00:03:08.376 CC lib/util/cpuset.o 00:03:08.376 CC lib/ioat/ioat.o 00:03:08.376 CC lib/util/crc32_ieee.o 00:03:08.376 CC lib/vfio_user/host/vfio_user_pci.o 00:03:08.376 CC lib/util/crc64.o 00:03:08.376 CC lib/util/dif.o 00:03:08.635 CC lib/util/fd.o 00:03:08.635 LIB libspdk_dma.a 00:03:08.635 SO libspdk_dma.so.4.0 00:03:08.635 SYMLINK libspdk_dma.so 00:03:08.635 CC lib/vfio_user/host/vfio_user.o 00:03:08.635 CC lib/util/fd_group.o 00:03:08.635 CC lib/util/file.o 00:03:08.635 CC lib/util/hexlify.o 00:03:08.635 CC lib/util/iov.o 00:03:08.635 CC lib/util/math.o 00:03:08.893 LIB libspdk_ioat.a 00:03:08.893 SO libspdk_ioat.so.7.0 00:03:08.893 CC lib/util/net.o 00:03:08.893 SYMLINK libspdk_ioat.so 00:03:08.893 CC lib/util/pipe.o 00:03:08.893 CC lib/util/strerror_tls.o 00:03:08.893 CC lib/util/string.o 00:03:08.893 CC lib/util/uuid.o 00:03:09.151 LIB libspdk_vfio_user.a 00:03:09.151 CC lib/util/xor.o 00:03:09.151 CC lib/util/zipf.o 00:03:09.151 SO libspdk_vfio_user.so.5.0 00:03:09.151 SYMLINK libspdk_vfio_user.so 00:03:09.151 LIB libspdk_util.a 00:03:09.409 SO libspdk_util.so.10.0 00:03:09.409 LIB libspdk_trace_parser.a 00:03:09.409 SO libspdk_trace_parser.so.5.0 00:03:09.667 SYMLINK libspdk_util.so 00:03:09.667 SYMLINK libspdk_trace_parser.so 00:03:09.667 CC lib/rdma_provider/common.o 00:03:09.667 CC lib/conf/conf.o 00:03:09.667 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:09.667 CC lib/json/json_parse.o 00:03:09.667 CC lib/json/json_util.o 00:03:09.667 CC lib/json/json_write.o 00:03:09.667 CC lib/env_dpdk/env.o 00:03:09.667 CC lib/rdma_utils/rdma_utils.o 00:03:09.667 CC lib/idxd/idxd.o 00:03:09.667 CC 
lib/vmd/vmd.o 00:03:09.926 CC lib/env_dpdk/memory.o 00:03:09.926 LIB libspdk_rdma_provider.a 00:03:09.926 LIB libspdk_conf.a 00:03:09.926 CC lib/env_dpdk/pci.o 00:03:09.926 CC lib/env_dpdk/init.o 00:03:09.926 SO libspdk_rdma_provider.so.6.0 00:03:09.926 SO libspdk_conf.so.6.0 00:03:09.926 LIB libspdk_json.a 00:03:09.926 LIB libspdk_rdma_utils.a 00:03:09.926 SYMLINK libspdk_conf.so 00:03:09.926 SYMLINK libspdk_rdma_provider.so 00:03:09.926 CC lib/env_dpdk/threads.o 00:03:09.926 CC lib/env_dpdk/pci_ioat.o 00:03:10.184 SO libspdk_json.so.6.0 00:03:10.184 SO libspdk_rdma_utils.so.1.0 00:03:10.184 SYMLINK libspdk_rdma_utils.so 00:03:10.184 CC lib/env_dpdk/pci_virtio.o 00:03:10.184 SYMLINK libspdk_json.so 00:03:10.184 CC lib/env_dpdk/pci_vmd.o 00:03:10.184 CC lib/env_dpdk/pci_idxd.o 00:03:10.184 CC lib/vmd/led.o 00:03:10.184 CC lib/idxd/idxd_user.o 00:03:10.184 CC lib/env_dpdk/pci_event.o 00:03:10.184 CC lib/idxd/idxd_kernel.o 00:03:10.184 CC lib/env_dpdk/sigbus_handler.o 00:03:10.442 CC lib/env_dpdk/pci_dpdk.o 00:03:10.442 LIB libspdk_vmd.a 00:03:10.442 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:10.442 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:10.442 SO libspdk_vmd.so.6.0 00:03:10.442 SYMLINK libspdk_vmd.so 00:03:10.442 LIB libspdk_idxd.a 00:03:10.442 SO libspdk_idxd.so.12.0 00:03:10.701 SYMLINK libspdk_idxd.so 00:03:10.701 CC lib/jsonrpc/jsonrpc_server.o 00:03:10.701 CC lib/jsonrpc/jsonrpc_client.o 00:03:10.701 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:10.701 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:11.031 LIB libspdk_jsonrpc.a 00:03:11.031 SO libspdk_jsonrpc.so.6.0 00:03:11.031 LIB libspdk_env_dpdk.a 00:03:11.031 SYMLINK libspdk_jsonrpc.so 00:03:11.288 SO libspdk_env_dpdk.so.15.0 00:03:11.288 SYMLINK libspdk_env_dpdk.so 00:03:11.288 CC lib/rpc/rpc.o 00:03:11.545 LIB libspdk_rpc.a 00:03:11.545 SO libspdk_rpc.so.6.0 00:03:11.803 SYMLINK libspdk_rpc.so 00:03:11.803 CC lib/keyring/keyring.o 00:03:11.803 CC lib/keyring/keyring_rpc.o 00:03:11.803 CC lib/notify/notify_rpc.o 
00:03:11.803 CC lib/notify/notify.o 00:03:12.061 CC lib/trace/trace.o 00:03:12.061 CC lib/trace/trace_flags.o 00:03:12.061 CC lib/trace/trace_rpc.o 00:03:12.061 LIB libspdk_notify.a 00:03:12.061 SO libspdk_notify.so.6.0 00:03:12.061 LIB libspdk_keyring.a 00:03:12.061 SYMLINK libspdk_notify.so 00:03:12.319 SO libspdk_keyring.so.1.0 00:03:12.319 LIB libspdk_trace.a 00:03:12.319 SYMLINK libspdk_keyring.so 00:03:12.319 SO libspdk_trace.so.10.0 00:03:12.319 SYMLINK libspdk_trace.so 00:03:12.576 CC lib/sock/sock_rpc.o 00:03:12.576 CC lib/sock/sock.o 00:03:12.576 CC lib/thread/thread.o 00:03:12.576 CC lib/thread/iobuf.o 00:03:13.144 LIB libspdk_sock.a 00:03:13.144 SO libspdk_sock.so.10.0 00:03:13.416 SYMLINK libspdk_sock.so 00:03:13.674 CC lib/nvme/nvme_ctrlr.o 00:03:13.674 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:13.674 CC lib/nvme/nvme_fabric.o 00:03:13.674 CC lib/nvme/nvme_ns.o 00:03:13.674 CC lib/nvme/nvme_ns_cmd.o 00:03:13.674 CC lib/nvme/nvme_pcie_common.o 00:03:13.674 CC lib/nvme/nvme_pcie.o 00:03:13.674 CC lib/nvme/nvme.o 00:03:13.674 CC lib/nvme/nvme_qpair.o 00:03:14.240 LIB libspdk_thread.a 00:03:14.240 SO libspdk_thread.so.10.1 00:03:14.499 CC lib/nvme/nvme_quirks.o 00:03:14.499 SYMLINK libspdk_thread.so 00:03:14.499 CC lib/nvme/nvme_transport.o 00:03:14.499 CC lib/nvme/nvme_discovery.o 00:03:14.757 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:14.757 CC lib/accel/accel.o 00:03:15.014 CC lib/accel/accel_rpc.o 00:03:15.014 CC lib/blob/blobstore.o 00:03:15.014 CC lib/init/json_config.o 00:03:15.014 CC lib/init/subsystem.o 00:03:15.014 CC lib/init/subsystem_rpc.o 00:03:15.272 CC lib/virtio/virtio.o 00:03:15.272 CC lib/virtio/virtio_vhost_user.o 00:03:15.272 CC lib/init/rpc.o 00:03:15.272 CC lib/blob/request.o 00:03:15.272 CC lib/blob/zeroes.o 00:03:15.272 CC lib/virtio/virtio_vfio_user.o 00:03:15.272 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:15.528 LIB libspdk_init.a 00:03:15.528 CC lib/blob/blob_bs_dev.o 00:03:15.528 CC lib/virtio/virtio_pci.o 00:03:15.528 SO 
libspdk_init.so.5.0 00:03:15.528 CC lib/accel/accel_sw.o 00:03:15.528 CC lib/nvme/nvme_tcp.o 00:03:15.528 SYMLINK libspdk_init.so 00:03:15.528 CC lib/nvme/nvme_opal.o 00:03:15.528 CC lib/nvme/nvme_io_msg.o 00:03:15.785 CC lib/nvme/nvme_poll_group.o 00:03:15.785 CC lib/nvme/nvme_zns.o 00:03:15.785 LIB libspdk_accel.a 00:03:15.785 LIB libspdk_virtio.a 00:03:15.785 SO libspdk_accel.so.16.0 00:03:15.785 SO libspdk_virtio.so.7.0 00:03:16.042 SYMLINK libspdk_accel.so 00:03:16.042 SYMLINK libspdk_virtio.so 00:03:16.042 CC lib/nvme/nvme_stubs.o 00:03:16.042 CC lib/nvme/nvme_auth.o 00:03:16.299 CC lib/event/app.o 00:03:16.299 CC lib/event/reactor.o 00:03:16.299 CC lib/nvme/nvme_cuse.o 00:03:16.299 CC lib/nvme/nvme_rdma.o 00:03:16.299 CC lib/event/log_rpc.o 00:03:16.556 CC lib/event/app_rpc.o 00:03:16.556 CC lib/event/scheduler_static.o 00:03:16.812 CC lib/bdev/bdev.o 00:03:16.812 CC lib/bdev/bdev_rpc.o 00:03:16.812 CC lib/bdev/bdev_zone.o 00:03:16.812 CC lib/bdev/part.o 00:03:16.812 LIB libspdk_event.a 00:03:16.812 SO libspdk_event.so.14.0 00:03:17.070 SYMLINK libspdk_event.so 00:03:17.070 CC lib/bdev/scsi_nvme.o 00:03:17.634 LIB libspdk_nvme.a 00:03:17.892 SO libspdk_nvme.so.13.1 00:03:18.148 SYMLINK libspdk_nvme.so 00:03:18.406 LIB libspdk_blob.a 00:03:18.406 SO libspdk_blob.so.11.0 00:03:18.406 SYMLINK libspdk_blob.so 00:03:18.664 CC lib/lvol/lvol.o 00:03:18.664 CC lib/blobfs/blobfs.o 00:03:18.664 CC lib/blobfs/tree.o 00:03:19.228 LIB libspdk_bdev.a 00:03:19.485 SO libspdk_bdev.so.16.0 00:03:19.485 SYMLINK libspdk_bdev.so 00:03:19.485 LIB libspdk_blobfs.a 00:03:19.743 SO libspdk_blobfs.so.10.0 00:03:19.743 LIB libspdk_lvol.a 00:03:19.743 SYMLINK libspdk_blobfs.so 00:03:19.743 SO libspdk_lvol.so.10.0 00:03:19.743 CC lib/nbd/nbd.o 00:03:19.743 CC lib/nbd/nbd_rpc.o 00:03:19.743 CC lib/scsi/dev.o 00:03:19.743 CC lib/scsi/lun.o 00:03:19.743 CC lib/scsi/scsi.o 00:03:19.743 CC lib/scsi/port.o 00:03:19.743 CC lib/ublk/ublk.o 00:03:19.743 CC lib/ftl/ftl_core.o 00:03:19.743 CC 
lib/nvmf/ctrlr.o 00:03:19.743 SYMLINK libspdk_lvol.so 00:03:19.743 CC lib/nvmf/ctrlr_discovery.o 00:03:20.000 CC lib/scsi/scsi_bdev.o 00:03:20.000 CC lib/scsi/scsi_pr.o 00:03:20.000 CC lib/scsi/scsi_rpc.o 00:03:20.000 CC lib/ublk/ublk_rpc.o 00:03:20.000 CC lib/scsi/task.o 00:03:20.000 CC lib/ftl/ftl_init.o 00:03:20.282 CC lib/nvmf/ctrlr_bdev.o 00:03:20.282 CC lib/ftl/ftl_layout.o 00:03:20.282 LIB libspdk_nbd.a 00:03:20.282 SO libspdk_nbd.so.7.0 00:03:20.282 CC lib/ftl/ftl_debug.o 00:03:20.282 CC lib/nvmf/subsystem.o 00:03:20.282 CC lib/nvmf/nvmf.o 00:03:20.282 CC lib/ftl/ftl_io.o 00:03:20.282 SYMLINK libspdk_nbd.so 00:03:20.282 CC lib/ftl/ftl_sb.o 00:03:20.282 LIB libspdk_ublk.a 00:03:20.541 LIB libspdk_scsi.a 00:03:20.541 SO libspdk_ublk.so.3.0 00:03:20.541 SO libspdk_scsi.so.9.0 00:03:20.541 SYMLINK libspdk_ublk.so 00:03:20.541 CC lib/nvmf/nvmf_rpc.o 00:03:20.541 CC lib/ftl/ftl_l2p.o 00:03:20.541 CC lib/ftl/ftl_l2p_flat.o 00:03:20.541 SYMLINK libspdk_scsi.so 00:03:20.541 CC lib/nvmf/transport.o 00:03:20.541 CC lib/ftl/ftl_nv_cache.o 00:03:20.541 CC lib/ftl/ftl_band.o 00:03:20.801 CC lib/ftl/ftl_band_ops.o 00:03:20.801 CC lib/ftl/ftl_writer.o 00:03:20.801 CC lib/ftl/ftl_rq.o 00:03:21.059 CC lib/ftl/ftl_reloc.o 00:03:21.059 CC lib/ftl/ftl_l2p_cache.o 00:03:21.059 CC lib/ftl/ftl_p2l.o 00:03:21.317 CC lib/iscsi/conn.o 00:03:21.317 CC lib/iscsi/init_grp.o 00:03:21.317 CC lib/iscsi/iscsi.o 00:03:21.317 CC lib/iscsi/md5.o 00:03:21.317 CC lib/iscsi/param.o 00:03:21.317 CC lib/iscsi/portal_grp.o 00:03:21.576 CC lib/nvmf/tcp.o 00:03:21.576 CC lib/iscsi/tgt_node.o 00:03:21.576 CC lib/iscsi/iscsi_subsystem.o 00:03:21.576 CC lib/iscsi/iscsi_rpc.o 00:03:21.576 CC lib/ftl/mngt/ftl_mngt.o 00:03:21.835 CC lib/iscsi/task.o 00:03:21.835 CC lib/vhost/vhost.o 00:03:21.835 CC lib/nvmf/stubs.o 00:03:21.835 CC lib/nvmf/mdns_server.o 00:03:21.835 CC lib/nvmf/rdma.o 00:03:22.093 CC lib/nvmf/auth.o 00:03:22.093 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:22.093 CC 
lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:22.093 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:22.093 CC lib/vhost/vhost_rpc.o 00:03:22.351 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:22.351 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:22.351 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:22.351 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:22.351 CC lib/vhost/vhost_scsi.o 00:03:22.610 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:22.610 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:22.610 CC lib/vhost/vhost_blk.o 00:03:22.610 CC lib/vhost/rte_vhost_user.o 00:03:22.610 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:22.610 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:22.867 LIB libspdk_iscsi.a 00:03:22.867 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:22.867 SO libspdk_iscsi.so.8.0 00:03:22.867 CC lib/ftl/utils/ftl_conf.o 00:03:22.867 CC lib/ftl/utils/ftl_md.o 00:03:23.125 CC lib/ftl/utils/ftl_mempool.o 00:03:23.125 SYMLINK libspdk_iscsi.so 00:03:23.125 CC lib/ftl/utils/ftl_bitmap.o 00:03:23.125 CC lib/ftl/utils/ftl_property.o 00:03:23.125 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:23.125 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:23.125 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:23.125 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:23.383 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:23.383 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:23.383 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:23.384 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:23.384 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:23.384 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:23.384 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:23.384 CC lib/ftl/base/ftl_base_dev.o 00:03:23.384 CC lib/ftl/base/ftl_base_bdev.o 00:03:23.642 CC lib/ftl/ftl_trace.o 00:03:23.642 LIB libspdk_vhost.a 00:03:23.642 SO libspdk_vhost.so.8.0 00:03:23.642 LIB libspdk_ftl.a 00:03:23.642 SYMLINK libspdk_vhost.so 00:03:23.900 LIB libspdk_nvmf.a 00:03:23.900 SO libspdk_nvmf.so.19.0 00:03:23.900 SO libspdk_ftl.so.9.0 00:03:24.159 SYMLINK libspdk_nvmf.so 00:03:24.416 SYMLINK libspdk_ftl.so 00:03:24.674 CC module/env_dpdk/env_dpdk_rpc.o 
00:03:24.932 CC module/accel/dsa/accel_dsa.o 00:03:24.932 CC module/accel/error/accel_error.o 00:03:24.932 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:24.932 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:24.932 CC module/accel/ioat/accel_ioat.o 00:03:24.932 CC module/scheduler/gscheduler/gscheduler.o 00:03:24.932 CC module/blob/bdev/blob_bdev.o 00:03:24.932 CC module/sock/posix/posix.o 00:03:24.932 CC module/keyring/file/keyring.o 00:03:24.932 LIB libspdk_env_dpdk_rpc.a 00:03:24.932 SO libspdk_env_dpdk_rpc.so.6.0 00:03:24.932 SYMLINK libspdk_env_dpdk_rpc.so 00:03:24.932 LIB libspdk_scheduler_gscheduler.a 00:03:24.932 CC module/keyring/file/keyring_rpc.o 00:03:24.932 LIB libspdk_scheduler_dpdk_governor.a 00:03:24.932 CC module/accel/error/accel_error_rpc.o 00:03:24.932 SO libspdk_scheduler_gscheduler.so.4.0 00:03:24.932 CC module/accel/ioat/accel_ioat_rpc.o 00:03:24.932 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:24.932 LIB libspdk_scheduler_dynamic.a 00:03:25.189 SO libspdk_scheduler_dynamic.so.4.0 00:03:25.189 LIB libspdk_blob_bdev.a 00:03:25.189 SYMLINK libspdk_scheduler_gscheduler.so 00:03:25.189 CC module/accel/dsa/accel_dsa_rpc.o 00:03:25.189 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:25.189 SO libspdk_blob_bdev.so.11.0 00:03:25.189 LIB libspdk_keyring_file.a 00:03:25.189 SYMLINK libspdk_scheduler_dynamic.so 00:03:25.189 CC module/keyring/linux/keyring.o 00:03:25.189 CC module/keyring/linux/keyring_rpc.o 00:03:25.189 LIB libspdk_accel_error.a 00:03:25.189 SO libspdk_keyring_file.so.1.0 00:03:25.189 LIB libspdk_accel_ioat.a 00:03:25.189 SYMLINK libspdk_blob_bdev.so 00:03:25.189 SO libspdk_accel_error.so.2.0 00:03:25.189 SO libspdk_accel_ioat.so.6.0 00:03:25.189 SYMLINK libspdk_keyring_file.so 00:03:25.189 LIB libspdk_accel_dsa.a 00:03:25.189 SYMLINK libspdk_accel_ioat.so 00:03:25.189 SO libspdk_accel_dsa.so.5.0 00:03:25.189 SYMLINK libspdk_accel_error.so 00:03:25.189 CC module/accel/iaa/accel_iaa.o 00:03:25.189 LIB 
libspdk_keyring_linux.a 00:03:25.447 SO libspdk_keyring_linux.so.1.0 00:03:25.447 SYMLINK libspdk_accel_dsa.so 00:03:25.447 SYMLINK libspdk_keyring_linux.so 00:03:25.447 CC module/bdev/error/vbdev_error.o 00:03:25.447 CC module/bdev/lvol/vbdev_lvol.o 00:03:25.447 CC module/bdev/delay/vbdev_delay.o 00:03:25.447 CC module/blobfs/bdev/blobfs_bdev.o 00:03:25.447 CC module/bdev/malloc/bdev_malloc.o 00:03:25.447 CC module/bdev/gpt/gpt.o 00:03:25.447 CC module/accel/iaa/accel_iaa_rpc.o 00:03:25.447 LIB libspdk_sock_posix.a 00:03:25.447 CC module/bdev/null/bdev_null.o 00:03:25.705 SO libspdk_sock_posix.so.6.0 00:03:25.705 CC module/bdev/nvme/bdev_nvme.o 00:03:25.705 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:25.705 CC module/bdev/gpt/vbdev_gpt.o 00:03:25.705 LIB libspdk_accel_iaa.a 00:03:25.705 SYMLINK libspdk_sock_posix.so 00:03:25.705 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:25.705 CC module/bdev/error/vbdev_error_rpc.o 00:03:25.705 SO libspdk_accel_iaa.so.3.0 00:03:25.705 CC module/bdev/null/bdev_null_rpc.o 00:03:25.705 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:25.962 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:25.962 SYMLINK libspdk_accel_iaa.so 00:03:25.962 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:25.962 LIB libspdk_blobfs_bdev.a 00:03:25.962 SO libspdk_blobfs_bdev.so.6.0 00:03:25.962 LIB libspdk_bdev_error.a 00:03:25.962 CC module/bdev/nvme/nvme_rpc.o 00:03:25.962 LIB libspdk_bdev_null.a 00:03:25.962 SO libspdk_bdev_error.so.6.0 00:03:25.962 SYMLINK libspdk_blobfs_bdev.so 00:03:25.962 CC module/bdev/nvme/bdev_mdns_client.o 00:03:25.962 LIB libspdk_bdev_malloc.a 00:03:25.962 LIB libspdk_bdev_gpt.a 00:03:25.962 SO libspdk_bdev_null.so.6.0 00:03:25.962 LIB libspdk_bdev_delay.a 00:03:25.962 SO libspdk_bdev_malloc.so.6.0 00:03:25.962 SO libspdk_bdev_gpt.so.6.0 00:03:25.962 SYMLINK libspdk_bdev_error.so 00:03:25.962 SO libspdk_bdev_delay.so.6.0 00:03:26.219 SYMLINK libspdk_bdev_null.so 00:03:26.219 SYMLINK libspdk_bdev_gpt.so 00:03:26.219 CC 
module/bdev/nvme/vbdev_opal.o 00:03:26.219 SYMLINK libspdk_bdev_malloc.so 00:03:26.219 SYMLINK libspdk_bdev_delay.so 00:03:26.219 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:26.219 LIB libspdk_bdev_lvol.a 00:03:26.219 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:26.219 SO libspdk_bdev_lvol.so.6.0 00:03:26.219 CC module/bdev/passthru/vbdev_passthru.o 00:03:26.219 CC module/bdev/raid/bdev_raid.o 00:03:26.219 CC module/bdev/split/vbdev_split.o 00:03:26.219 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:26.219 SYMLINK libspdk_bdev_lvol.so 00:03:26.476 CC module/bdev/raid/bdev_raid_rpc.o 00:03:26.476 CC module/bdev/raid/bdev_raid_sb.o 00:03:26.476 CC module/bdev/aio/bdev_aio.o 00:03:26.476 CC module/bdev/split/vbdev_split_rpc.o 00:03:26.476 CC module/bdev/ftl/bdev_ftl.o 00:03:26.476 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:26.476 CC module/bdev/iscsi/bdev_iscsi.o 00:03:26.733 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:26.733 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:26.733 LIB libspdk_bdev_split.a 00:03:26.733 LIB libspdk_bdev_passthru.a 00:03:26.733 SO libspdk_bdev_split.so.6.0 00:03:26.733 SO libspdk_bdev_passthru.so.6.0 00:03:26.733 SYMLINK libspdk_bdev_split.so 00:03:26.733 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:26.733 LIB libspdk_bdev_zone_block.a 00:03:26.733 CC module/bdev/aio/bdev_aio_rpc.o 00:03:26.733 CC module/bdev/raid/raid0.o 00:03:26.733 SYMLINK libspdk_bdev_passthru.so 00:03:26.733 CC module/bdev/raid/raid1.o 00:03:26.992 SO libspdk_bdev_zone_block.so.6.0 00:03:26.992 SYMLINK libspdk_bdev_zone_block.so 00:03:26.992 CC module/bdev/raid/concat.o 00:03:26.992 LIB libspdk_bdev_iscsi.a 00:03:26.992 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:26.992 CC module/bdev/rbd/bdev_rbd.o 00:03:26.992 SO libspdk_bdev_iscsi.so.6.0 00:03:26.992 LIB libspdk_bdev_aio.a 00:03:26.992 LIB libspdk_bdev_ftl.a 00:03:26.992 SO libspdk_bdev_aio.so.6.0 00:03:26.992 SO libspdk_bdev_ftl.so.6.0 00:03:26.992 SYMLINK libspdk_bdev_iscsi.so 00:03:26.992 CC 
module/bdev/rbd/bdev_rbd_rpc.o 00:03:26.992 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:26.992 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:26.992 SYMLINK libspdk_bdev_aio.so 00:03:27.249 SYMLINK libspdk_bdev_ftl.so 00:03:27.249 LIB libspdk_bdev_raid.a 00:03:27.249 SO libspdk_bdev_raid.so.6.0 00:03:27.506 LIB libspdk_bdev_rbd.a 00:03:27.506 SYMLINK libspdk_bdev_raid.so 00:03:27.506 SO libspdk_bdev_rbd.so.7.0 00:03:27.506 LIB libspdk_bdev_virtio.a 00:03:27.506 SO libspdk_bdev_virtio.so.6.0 00:03:27.506 SYMLINK libspdk_bdev_rbd.so 00:03:27.506 SYMLINK libspdk_bdev_virtio.so 00:03:27.762 LIB libspdk_bdev_nvme.a 00:03:27.762 SO libspdk_bdev_nvme.so.7.0 00:03:28.020 SYMLINK libspdk_bdev_nvme.so 00:03:28.587 CC module/event/subsystems/scheduler/scheduler.o 00:03:28.587 CC module/event/subsystems/iobuf/iobuf.o 00:03:28.587 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:28.587 CC module/event/subsystems/vmd/vmd.o 00:03:28.587 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:28.587 CC module/event/subsystems/sock/sock.o 00:03:28.587 CC module/event/subsystems/keyring/keyring.o 00:03:28.587 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:28.587 LIB libspdk_event_vmd.a 00:03:28.587 LIB libspdk_event_keyring.a 00:03:28.587 LIB libspdk_event_scheduler.a 00:03:28.850 SO libspdk_event_vmd.so.6.0 00:03:28.850 SO libspdk_event_keyring.so.1.0 00:03:28.850 LIB libspdk_event_sock.a 00:03:28.850 SO libspdk_event_scheduler.so.4.0 00:03:28.850 SO libspdk_event_sock.so.5.0 00:03:28.850 SYMLINK libspdk_event_vmd.so 00:03:28.850 LIB libspdk_event_iobuf.a 00:03:28.850 SYMLINK libspdk_event_keyring.so 00:03:28.850 SYMLINK libspdk_event_scheduler.so 00:03:28.850 LIB libspdk_event_vhost_blk.a 00:03:28.850 SYMLINK libspdk_event_sock.so 00:03:28.850 SO libspdk_event_iobuf.so.3.0 00:03:28.850 SO libspdk_event_vhost_blk.so.3.0 00:03:28.850 SYMLINK libspdk_event_vhost_blk.so 00:03:28.850 SYMLINK libspdk_event_iobuf.so 00:03:29.434 CC module/event/subsystems/accel/accel.o 00:03:29.434 LIB 
libspdk_event_accel.a 00:03:29.434 SO libspdk_event_accel.so.6.0 00:03:29.434 SYMLINK libspdk_event_accel.so 00:03:30.000 CC module/event/subsystems/bdev/bdev.o 00:03:30.000 LIB libspdk_event_bdev.a 00:03:30.256 SO libspdk_event_bdev.so.6.0 00:03:30.256 SYMLINK libspdk_event_bdev.so 00:03:30.514 CC module/event/subsystems/scsi/scsi.o 00:03:30.514 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:30.514 CC module/event/subsystems/nbd/nbd.o 00:03:30.514 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:30.514 CC module/event/subsystems/ublk/ublk.o 00:03:30.771 LIB libspdk_event_nbd.a 00:03:30.771 LIB libspdk_event_scsi.a 00:03:30.771 LIB libspdk_event_ublk.a 00:03:30.771 SO libspdk_event_nbd.so.6.0 00:03:30.771 LIB libspdk_event_nvmf.a 00:03:30.771 SO libspdk_event_scsi.so.6.0 00:03:30.771 SO libspdk_event_ublk.so.3.0 00:03:30.771 SYMLINK libspdk_event_nbd.so 00:03:30.771 SO libspdk_event_nvmf.so.6.0 00:03:30.771 SYMLINK libspdk_event_ublk.so 00:03:30.771 SYMLINK libspdk_event_scsi.so 00:03:30.771 SYMLINK libspdk_event_nvmf.so 00:03:31.029 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:31.029 CC module/event/subsystems/iscsi/iscsi.o 00:03:31.287 LIB libspdk_event_vhost_scsi.a 00:03:31.287 LIB libspdk_event_iscsi.a 00:03:31.287 SO libspdk_event_vhost_scsi.so.3.0 00:03:31.287 SO libspdk_event_iscsi.so.6.0 00:03:31.287 SYMLINK libspdk_event_vhost_scsi.so 00:03:31.287 SYMLINK libspdk_event_iscsi.so 00:03:31.545 SO libspdk.so.6.0 00:03:31.545 SYMLINK libspdk.so 00:03:31.804 CC test/rpc_client/rpc_client_test.o 00:03:31.804 CXX app/trace/trace.o 00:03:31.804 TEST_HEADER include/spdk/accel.h 00:03:31.804 CC app/trace_record/trace_record.o 00:03:31.804 TEST_HEADER include/spdk/accel_module.h 00:03:31.804 TEST_HEADER include/spdk/assert.h 00:03:31.804 TEST_HEADER include/spdk/barrier.h 00:03:31.804 TEST_HEADER include/spdk/base64.h 00:03:31.804 TEST_HEADER include/spdk/bdev.h 00:03:31.804 TEST_HEADER include/spdk/bdev_module.h 00:03:31.804 TEST_HEADER 
include/spdk/bdev_zone.h 00:03:31.804 TEST_HEADER include/spdk/bit_array.h 00:03:31.804 TEST_HEADER include/spdk/bit_pool.h 00:03:31.804 TEST_HEADER include/spdk/blob_bdev.h 00:03:31.804 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:31.804 TEST_HEADER include/spdk/blobfs.h 00:03:31.804 TEST_HEADER include/spdk/blob.h 00:03:31.804 TEST_HEADER include/spdk/conf.h 00:03:31.804 TEST_HEADER include/spdk/config.h 00:03:31.804 CC app/nvmf_tgt/nvmf_main.o 00:03:31.804 TEST_HEADER include/spdk/cpuset.h 00:03:31.804 TEST_HEADER include/spdk/crc16.h 00:03:31.804 TEST_HEADER include/spdk/crc32.h 00:03:31.804 TEST_HEADER include/spdk/crc64.h 00:03:31.804 TEST_HEADER include/spdk/dif.h 00:03:31.804 TEST_HEADER include/spdk/dma.h 00:03:31.804 TEST_HEADER include/spdk/endian.h 00:03:31.804 TEST_HEADER include/spdk/env_dpdk.h 00:03:31.804 TEST_HEADER include/spdk/env.h 00:03:31.804 TEST_HEADER include/spdk/event.h 00:03:31.804 TEST_HEADER include/spdk/fd_group.h 00:03:31.804 TEST_HEADER include/spdk/fd.h 00:03:31.804 TEST_HEADER include/spdk/file.h 00:03:32.065 TEST_HEADER include/spdk/ftl.h 00:03:32.065 TEST_HEADER include/spdk/gpt_spec.h 00:03:32.065 TEST_HEADER include/spdk/hexlify.h 00:03:32.065 TEST_HEADER include/spdk/histogram_data.h 00:03:32.065 TEST_HEADER include/spdk/idxd.h 00:03:32.065 TEST_HEADER include/spdk/idxd_spec.h 00:03:32.065 TEST_HEADER include/spdk/init.h 00:03:32.065 TEST_HEADER include/spdk/ioat.h 00:03:32.065 TEST_HEADER include/spdk/ioat_spec.h 00:03:32.065 TEST_HEADER include/spdk/iscsi_spec.h 00:03:32.065 TEST_HEADER include/spdk/json.h 00:03:32.065 TEST_HEADER include/spdk/jsonrpc.h 00:03:32.065 TEST_HEADER include/spdk/keyring.h 00:03:32.065 TEST_HEADER include/spdk/keyring_module.h 00:03:32.065 CC test/thread/poller_perf/poller_perf.o 00:03:32.065 TEST_HEADER include/spdk/likely.h 00:03:32.065 TEST_HEADER include/spdk/log.h 00:03:32.065 TEST_HEADER include/spdk/lvol.h 00:03:32.065 TEST_HEADER include/spdk/memory.h 00:03:32.065 TEST_HEADER 
include/spdk/mmio.h 00:03:32.065 CC examples/util/zipf/zipf.o 00:03:32.065 TEST_HEADER include/spdk/nbd.h 00:03:32.065 TEST_HEADER include/spdk/net.h 00:03:32.065 CC test/app/bdev_svc/bdev_svc.o 00:03:32.065 TEST_HEADER include/spdk/notify.h 00:03:32.065 TEST_HEADER include/spdk/nvme.h 00:03:32.065 TEST_HEADER include/spdk/nvme_intel.h 00:03:32.065 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:32.065 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:32.065 TEST_HEADER include/spdk/nvme_spec.h 00:03:32.065 TEST_HEADER include/spdk/nvme_zns.h 00:03:32.065 CC test/dma/test_dma/test_dma.o 00:03:32.065 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:32.065 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:32.065 TEST_HEADER include/spdk/nvmf.h 00:03:32.065 TEST_HEADER include/spdk/nvmf_spec.h 00:03:32.065 TEST_HEADER include/spdk/nvmf_transport.h 00:03:32.065 TEST_HEADER include/spdk/opal.h 00:03:32.065 TEST_HEADER include/spdk/opal_spec.h 00:03:32.065 TEST_HEADER include/spdk/pci_ids.h 00:03:32.065 TEST_HEADER include/spdk/pipe.h 00:03:32.065 TEST_HEADER include/spdk/queue.h 00:03:32.065 TEST_HEADER include/spdk/reduce.h 00:03:32.065 TEST_HEADER include/spdk/rpc.h 00:03:32.065 CC test/env/mem_callbacks/mem_callbacks.o 00:03:32.065 TEST_HEADER include/spdk/scheduler.h 00:03:32.065 TEST_HEADER include/spdk/scsi.h 00:03:32.065 TEST_HEADER include/spdk/scsi_spec.h 00:03:32.065 TEST_HEADER include/spdk/sock.h 00:03:32.065 TEST_HEADER include/spdk/stdinc.h 00:03:32.065 TEST_HEADER include/spdk/string.h 00:03:32.065 LINK rpc_client_test 00:03:32.065 TEST_HEADER include/spdk/thread.h 00:03:32.065 TEST_HEADER include/spdk/trace.h 00:03:32.065 TEST_HEADER include/spdk/trace_parser.h 00:03:32.065 TEST_HEADER include/spdk/tree.h 00:03:32.065 TEST_HEADER include/spdk/ublk.h 00:03:32.065 TEST_HEADER include/spdk/util.h 00:03:32.065 TEST_HEADER include/spdk/uuid.h 00:03:32.065 TEST_HEADER include/spdk/version.h 00:03:32.065 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:32.065 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:03:32.065 TEST_HEADER include/spdk/vhost.h 00:03:32.065 TEST_HEADER include/spdk/vmd.h 00:03:32.065 TEST_HEADER include/spdk/xor.h 00:03:32.065 TEST_HEADER include/spdk/zipf.h 00:03:32.065 CXX test/cpp_headers/accel.o 00:03:32.065 LINK nvmf_tgt 00:03:32.323 LINK poller_perf 00:03:32.323 LINK spdk_trace_record 00:03:32.323 LINK zipf 00:03:32.323 LINK bdev_svc 00:03:32.323 LINK spdk_trace 00:03:32.323 CXX test/cpp_headers/accel_module.o 00:03:32.580 CC app/iscsi_tgt/iscsi_tgt.o 00:03:32.580 LINK test_dma 00:03:32.580 CXX test/cpp_headers/assert.o 00:03:32.580 CC app/spdk_tgt/spdk_tgt.o 00:03:32.580 CC app/spdk_lspci/spdk_lspci.o 00:03:32.580 CC test/event/event_perf/event_perf.o 00:03:32.580 CC examples/ioat/perf/perf.o 00:03:32.580 LINK iscsi_tgt 00:03:32.580 CXX test/cpp_headers/barrier.o 00:03:32.580 LINK mem_callbacks 00:03:32.580 CC examples/vmd/lsvmd/lsvmd.o 00:03:32.838 LINK spdk_lspci 00:03:32.838 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:32.838 LINK spdk_tgt 00:03:32.838 CXX test/cpp_headers/base64.o 00:03:32.838 LINK event_perf 00:03:32.838 LINK lsvmd 00:03:32.838 LINK ioat_perf 00:03:32.838 CXX test/cpp_headers/bdev.o 00:03:32.838 CC test/env/vtophys/vtophys.o 00:03:32.838 CC app/spdk_nvme_perf/perf.o 00:03:32.838 CXX test/cpp_headers/bdev_module.o 00:03:33.097 CC test/app/histogram_perf/histogram_perf.o 00:03:33.097 CXX test/cpp_headers/bdev_zone.o 00:03:33.097 CC test/event/reactor/reactor.o 00:03:33.097 LINK vtophys 00:03:33.097 CC examples/vmd/led/led.o 00:03:33.097 LINK histogram_perf 00:03:33.097 CC examples/ioat/verify/verify.o 00:03:33.097 CXX test/cpp_headers/bit_array.o 00:03:33.097 LINK reactor 00:03:33.097 CC app/spdk_nvme_identify/identify.o 00:03:33.355 LINK nvme_fuzz 00:03:33.355 LINK led 00:03:33.355 CC examples/idxd/perf/perf.o 00:03:33.355 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:33.355 CXX test/cpp_headers/bit_pool.o 00:03:33.355 LINK verify 00:03:33.355 CC test/env/memory/memory_ut.o 
00:03:33.355 CC test/event/reactor_perf/reactor_perf.o 00:03:33.613 LINK env_dpdk_post_init 00:03:33.613 CXX test/cpp_headers/blob_bdev.o 00:03:33.614 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:33.614 CC test/event/app_repeat/app_repeat.o 00:03:33.614 LINK reactor_perf 00:03:33.614 CC test/env/pci/pci_ut.o 00:03:33.614 LINK spdk_nvme_perf 00:03:33.614 LINK idxd_perf 00:03:33.614 CXX test/cpp_headers/blobfs_bdev.o 00:03:33.872 LINK app_repeat 00:03:33.872 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:33.872 CXX test/cpp_headers/blobfs.o 00:03:33.872 CXX test/cpp_headers/blob.o 00:03:33.872 LINK spdk_nvme_identify 00:03:34.130 LINK interrupt_tgt 00:03:34.130 CC examples/thread/thread/thread_ex.o 00:03:34.130 LINK pci_ut 00:03:34.130 CC test/app/jsoncat/jsoncat.o 00:03:34.130 CXX test/cpp_headers/conf.o 00:03:34.130 CC test/event/scheduler/scheduler.o 00:03:34.130 CXX test/cpp_headers/config.o 00:03:34.130 LINK jsoncat 00:03:34.388 CC app/spdk_nvme_discover/discovery_aer.o 00:03:34.388 CC test/accel/dif/dif.o 00:03:34.388 LINK thread 00:03:34.388 CXX test/cpp_headers/cpuset.o 00:03:34.388 LINK scheduler 00:03:34.388 LINK spdk_nvme_discover 00:03:34.388 CC test/blobfs/mkfs/mkfs.o 00:03:34.388 LINK memory_ut 00:03:34.646 CXX test/cpp_headers/crc16.o 00:03:34.646 CC test/nvme/aer/aer.o 00:03:34.646 CC test/lvol/esnap/esnap.o 00:03:34.646 CXX test/cpp_headers/crc32.o 00:03:34.646 LINK mkfs 00:03:34.646 LINK dif 00:03:34.646 CC app/spdk_top/spdk_top.o 00:03:34.904 CC examples/sock/hello_world/hello_sock.o 00:03:34.904 CC examples/accel/perf/accel_perf.o 00:03:34.904 CXX test/cpp_headers/crc64.o 00:03:34.904 LINK aer 00:03:34.904 CC examples/blob/hello_world/hello_blob.o 00:03:35.162 CXX test/cpp_headers/dif.o 00:03:35.162 CXX test/cpp_headers/dma.o 00:03:35.162 CXX test/cpp_headers/endian.o 00:03:35.162 LINK hello_sock 00:03:35.162 CC test/nvme/reset/reset.o 00:03:35.162 LINK iscsi_fuzz 00:03:35.162 LINK hello_blob 00:03:35.162 CXX test/cpp_headers/env_dpdk.o 
00:03:35.162 LINK accel_perf 00:03:35.421 CC test/nvme/sgl/sgl.o 00:03:35.421 CC examples/blob/cli/blobcli.o 00:03:35.421 CC test/nvme/e2edp/nvme_dp.o 00:03:35.421 LINK reset 00:03:35.421 CXX test/cpp_headers/env.o 00:03:35.421 CXX test/cpp_headers/event.o 00:03:35.421 CXX test/cpp_headers/fd_group.o 00:03:35.421 CXX test/cpp_headers/fd.o 00:03:35.421 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:35.679 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:35.679 LINK sgl 00:03:35.679 LINK spdk_top 00:03:35.679 LINK nvme_dp 00:03:35.679 CXX test/cpp_headers/file.o 00:03:35.679 CC test/nvme/overhead/overhead.o 00:03:35.679 LINK blobcli 00:03:35.679 CC examples/nvme/hello_world/hello_world.o 00:03:35.937 CC examples/nvme/reconnect/reconnect.o 00:03:35.937 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:35.937 CXX test/cpp_headers/ftl.o 00:03:35.937 CC examples/bdev/hello_world/hello_bdev.o 00:03:35.937 CC app/vhost/vhost.o 00:03:35.937 LINK vhost_fuzz 00:03:35.937 LINK hello_world 00:03:35.937 CXX test/cpp_headers/gpt_spec.o 00:03:35.937 LINK overhead 00:03:36.195 LINK vhost 00:03:36.195 LINK hello_bdev 00:03:36.195 CC test/nvme/err_injection/err_injection.o 00:03:36.195 CXX test/cpp_headers/hexlify.o 00:03:36.195 CC test/app/stub/stub.o 00:03:36.195 LINK reconnect 00:03:36.195 CC test/nvme/startup/startup.o 00:03:36.453 LINK nvme_manage 00:03:36.454 CC test/nvme/reserve/reserve.o 00:03:36.454 CXX test/cpp_headers/histogram_data.o 00:03:36.454 LINK err_injection 00:03:36.454 LINK stub 00:03:36.454 CXX test/cpp_headers/idxd.o 00:03:36.454 CC app/spdk_dd/spdk_dd.o 00:03:36.454 LINK startup 00:03:36.454 CC examples/bdev/bdevperf/bdevperf.o 00:03:36.454 LINK reserve 00:03:36.712 CXX test/cpp_headers/idxd_spec.o 00:03:36.712 CC examples/nvme/arbitration/arbitration.o 00:03:36.712 CC examples/nvme/hotplug/hotplug.o 00:03:36.712 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:36.712 CC examples/nvme/abort/abort.o 00:03:36.712 CC examples/nvme/pmr_persistence/pmr_persistence.o 
00:03:36.712 CXX test/cpp_headers/init.o 00:03:36.970 LINK spdk_dd 00:03:36.970 CC test/nvme/simple_copy/simple_copy.o 00:03:36.970 LINK cmb_copy 00:03:36.970 LINK hotplug 00:03:36.970 CXX test/cpp_headers/ioat.o 00:03:36.970 LINK pmr_persistence 00:03:36.970 LINK arbitration 00:03:36.970 CXX test/cpp_headers/ioat_spec.o 00:03:37.228 LINK simple_copy 00:03:37.228 LINK abort 00:03:37.228 CXX test/cpp_headers/iscsi_spec.o 00:03:37.228 CC test/nvme/connect_stress/connect_stress.o 00:03:37.228 CC test/nvme/boot_partition/boot_partition.o 00:03:37.228 LINK bdevperf 00:03:37.228 CC test/nvme/compliance/nvme_compliance.o 00:03:37.228 CXX test/cpp_headers/json.o 00:03:37.228 CC app/fio/nvme/fio_plugin.o 00:03:37.487 CC test/nvme/fused_ordering/fused_ordering.o 00:03:37.487 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:37.487 CC app/fio/bdev/fio_plugin.o 00:03:37.487 LINK boot_partition 00:03:37.487 LINK connect_stress 00:03:37.487 CXX test/cpp_headers/jsonrpc.o 00:03:37.487 LINK fused_ordering 00:03:37.487 LINK doorbell_aers 00:03:37.745 LINK nvme_compliance 00:03:37.746 CXX test/cpp_headers/keyring.o 00:03:37.746 CC test/nvme/fdp/fdp.o 00:03:37.746 CC test/nvme/cuse/cuse.o 00:03:37.746 CC examples/nvmf/nvmf/nvmf.o 00:03:37.746 CXX test/cpp_headers/keyring_module.o 00:03:37.746 CXX test/cpp_headers/likely.o 00:03:38.033 LINK spdk_nvme 00:03:38.033 CXX test/cpp_headers/log.o 00:03:38.033 LINK spdk_bdev 00:03:38.033 CXX test/cpp_headers/lvol.o 00:03:38.033 CXX test/cpp_headers/memory.o 00:03:38.033 CXX test/cpp_headers/mmio.o 00:03:38.033 CC test/bdev/bdevio/bdevio.o 00:03:38.033 CXX test/cpp_headers/nbd.o 00:03:38.033 LINK fdp 00:03:38.033 CXX test/cpp_headers/net.o 00:03:38.033 LINK nvmf 00:03:38.033 CXX test/cpp_headers/notify.o 00:03:38.297 CXX test/cpp_headers/nvme.o 00:03:38.297 CXX test/cpp_headers/nvme_intel.o 00:03:38.297 CXX test/cpp_headers/nvme_ocssd.o 00:03:38.297 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:38.297 CXX test/cpp_headers/nvme_spec.o 
00:03:38.297 CXX test/cpp_headers/nvme_zns.o 00:03:38.555 CXX test/cpp_headers/nvmf_cmd.o 00:03:38.555 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:38.555 CXX test/cpp_headers/nvmf.o 00:03:38.555 LINK bdevio 00:03:38.555 CXX test/cpp_headers/nvmf_spec.o 00:03:38.555 CXX test/cpp_headers/nvmf_transport.o 00:03:38.555 CXX test/cpp_headers/opal.o 00:03:38.555 CXX test/cpp_headers/opal_spec.o 00:03:38.555 CXX test/cpp_headers/pci_ids.o 00:03:38.555 CXX test/cpp_headers/pipe.o 00:03:38.814 CXX test/cpp_headers/queue.o 00:03:38.814 CXX test/cpp_headers/reduce.o 00:03:38.814 CXX test/cpp_headers/rpc.o 00:03:38.814 CXX test/cpp_headers/scheduler.o 00:03:38.814 CXX test/cpp_headers/scsi.o 00:03:38.814 CXX test/cpp_headers/scsi_spec.o 00:03:38.814 CXX test/cpp_headers/sock.o 00:03:38.814 CXX test/cpp_headers/stdinc.o 00:03:38.814 CXX test/cpp_headers/string.o 00:03:38.814 CXX test/cpp_headers/thread.o 00:03:38.814 CXX test/cpp_headers/trace.o 00:03:39.073 CXX test/cpp_headers/trace_parser.o 00:03:39.073 CXX test/cpp_headers/tree.o 00:03:39.073 CXX test/cpp_headers/ublk.o 00:03:39.073 CXX test/cpp_headers/util.o 00:03:39.073 CXX test/cpp_headers/uuid.o 00:03:39.073 CXX test/cpp_headers/version.o 00:03:39.073 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.073 LINK cuse 00:03:39.073 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.073 CXX test/cpp_headers/vhost.o 00:03:39.073 CXX test/cpp_headers/vmd.o 00:03:39.073 CXX test/cpp_headers/xor.o 00:03:39.331 CXX test/cpp_headers/zipf.o 00:03:39.589 LINK esnap 00:03:40.524 00:03:40.524 real 1m7.987s 00:03:40.524 user 6m46.963s 00:03:40.524 sys 1m53.647s 00:03:40.524 10:04:13 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:40.524 10:04:13 make -- common/autotest_common.sh@10 -- $ set +x 00:03:40.524 ************************************ 00:03:40.524 END TEST make 00:03:40.524 ************************************ 00:03:40.524 10:04:13 -- common/autotest_common.sh@1142 -- $ return 0 00:03:40.524 10:04:13 -- 
spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:40.524 10:04:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:40.524 10:04:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:40.524 10:04:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.524 10:04:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:40.524 10:04:13 -- pm/common@44 -- $ pid=5207 00:03:40.524 10:04:13 -- pm/common@50 -- $ kill -TERM 5207 00:03:40.524 10:04:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.524 10:04:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:40.524 10:04:13 -- pm/common@44 -- $ pid=5209 00:03:40.524 10:04:13 -- pm/common@50 -- $ kill -TERM 5209 00:03:40.524 10:04:13 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:40.524 10:04:13 -- nvmf/common.sh@7 -- # uname -s 00:03:40.524 10:04:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:40.524 10:04:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:40.524 10:04:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:40.524 10:04:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:40.524 10:04:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:40.524 10:04:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:40.524 10:04:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:40.524 10:04:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:40.524 10:04:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:40.524 10:04:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:40.524 10:04:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:828469e8-3269-4fb6-840b-068387b38e35 00:03:40.524 10:04:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=828469e8-3269-4fb6-840b-068387b38e35 00:03:40.524 10:04:13 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:40.524 10:04:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:40.524 10:04:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:40.524 10:04:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:40.524 10:04:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:40.524 10:04:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:40.524 10:04:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:40.524 10:04:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:40.524 10:04:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.524 10:04:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.524 10:04:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.524 10:04:13 -- paths/export.sh@5 -- # export PATH 00:03:40.524 10:04:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:03:40.524 10:04:13 -- nvmf/common.sh@47 -- # : 0 00:03:40.524 10:04:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:40.524 10:04:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:40.524 10:04:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:40.524 10:04:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:40.524 10:04:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:40.524 10:04:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:40.524 10:04:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:40.524 10:04:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:40.524 10:04:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:40.524 10:04:13 -- spdk/autotest.sh@32 -- # uname -s 00:03:40.524 10:04:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:40.524 10:04:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:40.524 10:04:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.524 10:04:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:40.524 10:04:13 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.524 10:04:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:40.524 10:04:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:40.524 10:04:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:40.524 10:04:13 -- spdk/autotest.sh@48 -- # udevadm_pid=52826 00:03:40.524 10:04:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:40.524 10:04:13 -- pm/common@17 -- # local monitor 00:03:40.524 10:04:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.524 10:04:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.524 10:04:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:40.524 10:04:13 -- pm/common@25 -- # sleep 1 00:03:40.524 
10:04:13 -- pm/common@21 -- # date +%s 00:03:40.524 10:04:13 -- pm/common@21 -- # date +%s 00:03:40.524 10:04:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721901853 00:03:40.524 10:04:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721901853 00:03:40.524 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721901853_collect-vmstat.pm.log 00:03:40.524 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721901853_collect-cpu-load.pm.log 00:03:41.458 10:04:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:41.458 10:04:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:41.458 10:04:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:41.716 10:04:14 -- common/autotest_common.sh@10 -- # set +x 00:03:41.716 10:04:14 -- spdk/autotest.sh@59 -- # create_test_list 00:03:41.716 10:04:14 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:41.716 10:04:14 -- common/autotest_common.sh@10 -- # set +x 00:03:41.716 10:04:14 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:41.716 10:04:14 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:41.716 10:04:14 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:41.716 10:04:14 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:41.716 10:04:14 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:41.716 10:04:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:41.716 10:04:14 -- common/autotest_common.sh@1455 -- # uname 00:03:41.716 10:04:14 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:41.716 10:04:14 -- spdk/autotest.sh@66 -- # 
freebsd_set_maxsock_buf 00:03:41.716 10:04:14 -- common/autotest_common.sh@1475 -- # uname 00:03:41.716 10:04:14 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:41.716 10:04:14 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:41.716 10:04:14 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:41.716 10:04:14 -- spdk/autotest.sh@72 -- # hash lcov 00:03:41.716 10:04:14 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:41.716 10:04:14 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:41.716 --rc lcov_branch_coverage=1 00:03:41.716 --rc lcov_function_coverage=1 00:03:41.716 --rc genhtml_branch_coverage=1 00:03:41.716 --rc genhtml_function_coverage=1 00:03:41.716 --rc genhtml_legend=1 00:03:41.716 --rc geninfo_all_blocks=1 00:03:41.716 ' 00:03:41.716 10:04:14 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:41.716 --rc lcov_branch_coverage=1 00:03:41.716 --rc lcov_function_coverage=1 00:03:41.716 --rc genhtml_branch_coverage=1 00:03:41.716 --rc genhtml_function_coverage=1 00:03:41.716 --rc genhtml_legend=1 00:03:41.716 --rc geninfo_all_blocks=1 00:03:41.716 ' 00:03:41.716 10:04:14 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:41.716 --rc lcov_branch_coverage=1 00:03:41.716 --rc lcov_function_coverage=1 00:03:41.716 --rc genhtml_branch_coverage=1 00:03:41.716 --rc genhtml_function_coverage=1 00:03:41.716 --rc genhtml_legend=1 00:03:41.716 --rc geninfo_all_blocks=1 00:03:41.716 --no-external' 00:03:41.716 10:04:14 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:41.716 --rc lcov_branch_coverage=1 00:03:41.716 --rc lcov_function_coverage=1 00:03:41.716 --rc genhtml_branch_coverage=1 00:03:41.716 --rc genhtml_function_coverage=1 00:03:41.716 --rc genhtml_legend=1 00:03:41.716 --rc geninfo_all_blocks=1 00:03:41.716 --no-external' 00:03:41.716 10:04:14 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 
--rc geninfo_all_blocks=1 --no-external -v 00:03:41.716 lcov: LCOV version 1.14 00:03:41.716 10:04:14 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:59.789 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:59.789 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:14.658 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no 
functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:14.659 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 
00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:14.659 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:14.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:14.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:14.660 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 
00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:14.660 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:14.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:17.192 10:04:50 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:17.192 10:04:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.192 10:04:50 -- common/autotest_common.sh@10 -- # set +x 00:04:17.192 
10:04:50 -- spdk/autotest.sh@91 -- # rm -f 00:04:17.192 10:04:50 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.757 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:17.757 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:17.757 10:04:50 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:17.757 10:04:50 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:17.757 10:04:50 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:17.757 10:04:50 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:17.757 10:04:50 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.757 10:04:50 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:17.757 10:04:50 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:17.757 10:04:50 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:17.757 10:04:50 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:17.757 10:04:50 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.757 10:04:50 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:17.757 10:04:50 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:17.757 10:04:50 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:17.757 10:04:50 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:17.757 10:04:50 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.757 10:04:50 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:17.757 10:04:50 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:17.757 10:04:50 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:17.757 10:04:50 -- common/autotest_common.sh@1665 -- # [[ 
none != none ]] 00:04:17.757 10:04:50 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.758 10:04:50 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:17.758 10:04:50 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:17.758 10:04:50 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:17.758 10:04:50 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:17.758 10:04:50 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:17.758 10:04:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.758 10:04:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:17.758 10:04:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:17.758 10:04:50 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:17.758 10:04:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:18.016 No valid GPT data, bailing 00:04:18.016 10:04:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:18.016 10:04:51 -- scripts/common.sh@391 -- # pt= 00:04:18.016 10:04:51 -- scripts/common.sh@392 -- # return 1 00:04:18.016 10:04:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:18.016 1+0 records in 00:04:18.016 1+0 records out 00:04:18.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00573011 s, 183 MB/s 00:04:18.016 10:04:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.016 10:04:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:18.016 10:04:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:18.016 10:04:51 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:18.016 10:04:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:18.016 No valid GPT data, bailing 00:04:18.016 10:04:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:18.016 10:04:51 -- scripts/common.sh@391 -- 
# pt= 00:04:18.016 10:04:51 -- scripts/common.sh@392 -- # return 1 00:04:18.016 10:04:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:18.016 1+0 records in 00:04:18.016 1+0 records out 00:04:18.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481389 s, 218 MB/s 00:04:18.016 10:04:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.016 10:04:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:18.016 10:04:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:18.016 10:04:51 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:18.016 10:04:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:18.016 No valid GPT data, bailing 00:04:18.016 10:04:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:18.016 10:04:51 -- scripts/common.sh@391 -- # pt= 00:04:18.016 10:04:51 -- scripts/common.sh@392 -- # return 1 00:04:18.016 10:04:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:18.016 1+0 records in 00:04:18.016 1+0 records out 00:04:18.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00573277 s, 183 MB/s 00:04:18.017 10:04:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.017 10:04:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:18.017 10:04:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:18.017 10:04:51 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:18.017 10:04:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:18.275 No valid GPT data, bailing 00:04:18.275 10:04:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:18.275 10:04:51 -- scripts/common.sh@391 -- # pt= 00:04:18.275 10:04:51 -- scripts/common.sh@392 -- # return 1 00:04:18.275 10:04:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:18.275 1+0 records in 00:04:18.275 1+0 records 
out 00:04:18.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442676 s, 237 MB/s 00:04:18.275 10:04:51 -- spdk/autotest.sh@118 -- # sync 00:04:18.275 10:04:51 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:18.275 10:04:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:18.275 10:04:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:20.806 10:04:53 -- spdk/autotest.sh@124 -- # uname -s 00:04:20.806 10:04:53 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:20.806 10:04:53 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:20.806 10:04:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.806 10:04:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.806 10:04:53 -- common/autotest_common.sh@10 -- # set +x 00:04:20.806 ************************************ 00:04:20.806 START TEST setup.sh 00:04:20.806 ************************************ 00:04:20.806 10:04:53 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:20.806 * Looking for test storage... 
00:04:20.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:20.806 10:04:53 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:20.806 10:04:53 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:20.806 10:04:53 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:20.806 10:04:53 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.806 10:04:53 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.806 10:04:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:20.806 ************************************ 00:04:20.806 START TEST acl 00:04:20.806 ************************************ 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:20.806 * Looking for test storage... 00:04:20.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:20.806 10:04:53 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # 
is_block_zoned nvme1n1 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:20.806 10:04:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.806 10:04:53 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:20.806 10:04:53 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:20.806 10:04:53 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:20.806 10:04:53 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:20.806 10:04:53 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:20.806 10:04:53 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.806 10:04:53 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.373 10:04:54 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:21.373 10:04:54 setup.sh.acl 
-- setup/acl.sh@16 -- # local dev driver 00:04:21.373 10:04:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.373 10:04:54 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:21.373 10:04:54 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.373 10:04:54 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.309 Hugepages 00:04:22.309 node hugesize free / total 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.309 00:04:22.309 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@22 -- # 
drivers["$dev"]=nvme 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:22.309 10:04:55 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:22.309 10:04:55 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.309 10:04:55 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.309 10:04:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:22.309 ************************************ 00:04:22.309 START TEST denied 00:04:22.309 ************************************ 00:04:22.309 10:04:55 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:22.309 10:04:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:22.309 10:04:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:22.309 10:04:55 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:22.309 10:04:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.309 10:04:55 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.682 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:23.682 10:04:56 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:23.683 10:04:56 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:23.683 10:04:56 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:23.683 10:04:56 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:23.683 10:04:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:23.683 10:04:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:23.683 10:04:56 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:23.683 10:04:56 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:23.683 10:04:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.683 10:04:56 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.941 00:04:23.941 real 0m1.583s 00:04:23.941 user 0m0.597s 00:04:23.941 sys 0m0.943s 00:04:23.941 10:04:57 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.941 10:04:57 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:23.941 ************************************ 00:04:23.941 END TEST denied 00:04:23.941 ************************************ 00:04:23.941 10:04:57 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:23.941 10:04:57 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:23.941 10:04:57 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.941 10:04:57 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.941 10:04:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:23.941 ************************************ 00:04:23.941 START TEST allowed 00:04:23.941 ************************************ 00:04:23.941 10:04:57 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:23.941 10:04:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:23.941 10:04:57 setup.sh.acl.allowed -- setup/acl.sh@45 
-- # setup output config 00:04:23.941 10:04:57 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.941 10:04:57 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.941 10:04:57 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:24.876 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.876 10:04:58 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:24.876 10:04:58 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:24.876 10:04:58 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:24.876 10:04:58 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:24.876 10:04:58 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:24.876 10:04:58 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:24.876 10:04:58 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:24.876 10:04:58 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:24.876 10:04:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.876 10:04:58 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:25.810 00:04:25.810 real 0m1.689s 00:04:25.810 user 0m0.683s 00:04:25.810 sys 0m1.025s 00:04:25.810 10:04:58 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.810 10:04:58 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:25.810 ************************************ 00:04:25.810 END TEST allowed 00:04:25.810 ************************************ 00:04:25.810 10:04:58 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:25.810 00:04:25.810 real 0m5.228s 00:04:25.810 user 0m2.143s 00:04:25.810 sys 0m3.092s 00:04:25.810 10:04:58 setup.sh.acl -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.810 10:04:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:25.810 ************************************ 00:04:25.810 END TEST acl 00:04:25.810 ************************************ 00:04:25.810 10:04:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:25.810 10:04:58 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:25.810 10:04:58 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.810 10:04:58 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.810 10:04:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.810 ************************************ 00:04:25.810 START TEST hugepages 00:04:25.810 ************************************ 00:04:25.810 10:04:58 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:25.810 * Looking for test storage... 
00:04:25.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:25.810 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:25.810 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:25.810 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:25.810 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:25.810 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:25.810 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:25.810 10:04:59 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:25.810 10:04:59 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:25.810 10:04:59 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:25.810 10:04:59 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:25.810 10:04:59 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.810 10:04:59 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6023048 kB' 'MemAvailable: 7415268 kB' 'Buffers: 2436 kB' 'Cached: 1606396 kB' 'SwapCached: 0 kB' 'Active: 439904 kB' 'Inactive: 1273824 kB' 'Active(anon): 115384 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 356 kB' 'Writeback: 0 kB' 'AnonPages: 106584 kB' 'Mapped: 51316 kB' 'Shmem: 10488 kB' 'KReclaimable: 61628 kB' 'Slab: 137800 kB' 'SReclaimable: 61628 kB' 'SUnreclaim: 76172 kB' 'KernelStack: 6380 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 336612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.811 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.070 10:04:59 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.070 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 
-- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.071 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.072 10:04:59 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:26.072 10:04:59 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:26.072 10:04:59 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.072 10:04:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.072 10:04:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:26.072 ************************************ 00:04:26.072 START TEST default_setup 00:04:26.072 ************************************ 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.072 10:04:59 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.901 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.901 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.901 
10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.901 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8112900 kB' 'MemAvailable: 9504972 kB' 'Buffers: 2436 kB' 'Cached: 1606388 kB' 'SwapCached: 0 kB' 'Active: 456872 kB' 'Inactive: 1273824 kB' 'Active(anon): 132352 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123756 kB' 'Mapped: 51436 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137392 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76060 kB' 'KernelStack: 6432 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.902 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical no-match checks (setup/common.sh@31-@32: IFS=': ', read, [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]], continue) for the remaining /proc/meminfo fields, SwapCached through HardwareCorrupted, elided ...]
00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8112900 kB' 'MemAvailable: 9504972 kB' 'Buffers: 2436 kB' 'Cached: 1606388 kB' 'SwapCached: 0 kB' 'Active: 456656 kB' 'Inactive: 1273824 kB' 'Active(anon): 132136 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123084 kB' 'Mapped: 51436 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137400 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76068 kB' 'KernelStack: 6384 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 
00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.903 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical no-match checks (setup/common.sh@31-@32: IFS=': ', read, [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]], continue) for the remaining /proc/meminfo fields, MemFree through HugePages_Rsvd, elided ...]
00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # 
local var val 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8113164 kB' 'MemAvailable: 9505240 kB' 'Buffers: 2436 kB' 'Cached: 1606388 kB' 'SwapCached: 0 kB' 'Active: 456424 kB' 'Inactive: 1273828 kB' 'Active(anon): 131904 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122844 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137404 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76072 kB' 'KernelStack: 6384 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 
00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.905 10:05:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.905 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 
10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.906 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:26.907 nr_hugepages=1024 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.907 resv_hugepages=0 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.907 surplus_hugepages=0 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.907 anon_hugepages=0 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == 
nr_hugepages + surp + resv )) 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8112912 kB' 'MemAvailable: 9504988 kB' 'Buffers: 2436 kB' 'Cached: 1606388 kB' 'SwapCached: 0 kB' 'Active: 456560 kB' 'Inactive: 1273828 kB' 'Active(anon): 132040 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122928 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 
137404 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76072 kB' 'KernelStack: 6352 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.907 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.908 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 
10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.909 
10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.909 10:05:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv 
)) 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8113548 kB' 'MemUsed: 4128428 kB' 'SwapCached: 0 kB' 'Active: 456148 kB' 'Inactive: 1273828 kB' 'Active(anon): 131628 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1608824 kB' 'Mapped: 51332 kB' 'AnonPages: 122736 kB' 'Shmem: 10464 kB' 'KernelStack: 6372 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61332 kB' 'Slab: 137400 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76068 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.909 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.169 10:05:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.169 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.169 
10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 
10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:27.170 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.171 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:27.171 10:05:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:27.171 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.171 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:27.171 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.171 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.171 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
00:04:27.171 node0=1024 expecting 1024 00:04:27.171 10:05:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:27.171 00:04:27.171 real 0m1.052s 00:04:27.171 user 0m0.472s 00:04:27.171 sys 0m0.563s 00:04:27.171 10:05:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.171 10:05:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:27.171 ************************************ 00:04:27.171 END TEST default_setup 00:04:27.171 ************************************ 00:04:27.171 10:05:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:27.171 10:05:00 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:27.171 10:05:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.171 10:05:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.171 10:05:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.171 ************************************ 00:04:27.171 START TEST per_node_1G_alloc 00:04:27.171 ************************************ 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:27.171 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.171 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.430 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.430 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.430 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9161128 kB' 'MemAvailable: 10553216 kB' 'Buffers: 2436 kB' 'Cached: 1606388 kB' 'SwapCached: 0 kB' 'Active: 456932 kB' 'Inactive: 1273840 kB' 'Active(anon): 132412 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123536 kB' 'Mapped: 51472 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137436 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76104 kB' 'KernelStack: 6340 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:27.430 
10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.430 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.431 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.431 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.693 
10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.693 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.694 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9161128 kB' 'MemAvailable: 10553216 kB' 'Buffers: 2436 kB' 'Cached: 1606388 kB' 'SwapCached: 0 kB' 'Active: 456524 kB' 'Inactive: 1273840 kB' 'Active(anon): 132004 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123168 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137432 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76100 kB' 'KernelStack: 6368 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.694 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 
10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241976 kB' 'MemFree: 9161892 kB' 'MemAvailable: 10553976 kB' 'Buffers: 2436 kB' 'Cached: 1606384 kB' 'SwapCached: 0 kB' 'Active: 456304 kB' 'Inactive: 1273836 kB' 'Active(anon): 131784 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122920 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137424 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76092 kB' 'KernelStack: 6320 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.696 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.697 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 
10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:27.698 nr_hugepages=512 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:27.698 resv_hugepages=0 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:27.698 surplus_hugepages=0 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:27.698 anon_hugepages=0 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:27.698 
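The trace above is one full pass of the harness's `get_meminfo` helper: it snapshots a meminfo file with `mapfile`, strips the `Node N ` prefix that per-node sysfs files carry, then splits each line on `': '` and scans field names until the requested counter (`HugePages_Rsvd` here) matches, at which point the value is echoed. A minimal standalone sketch of that pattern is below; the function name and argument order follow the `setup/common.sh` trace, but this is an illustrative reconstruction, not the exact upstream source, and it assumes a Linux host with `/proc/meminfo`.

```shell
#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern used to strip "Node N "

# get_meminfo FIELD [NODE]
# Print the value of FIELD from /proc/meminfo, or from the per-node
# sysfs meminfo file when a NUMA node id is given; print 0 if absent.
get_meminfo() {
    local get=$1 node=${2:-}
    local -a mem
    local mem_f=/proc/meminfo line var val _

    # Per-node counters live under sysfs, e.g. node0/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node lines are prefixed "Node N "; strip it so both file
    # formats parse the same way (mirrors common.sh@29 in the trace).
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        # "HugePages_Total:     512" splits into var=HugePages_Total, val=512.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    echo 0
}

get_meminfo MemTotal
```

In the log, the same loop body (`common.sh@31` read, `@32` compare/continue, `@33` echo) repeats once per meminfo field because xtrace prints every iteration, which is why a single lookup spans hundreds of trace lines.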
10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.698 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9161892 kB' 'MemAvailable: 10553980 kB' 'Buffers: 2436 kB' 'Cached: 1606388 kB' 'SwapCached: 0 kB' 'Active: 456640 kB' 'Inactive: 1273840 kB' 'Active(anon): 132120 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123264 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137424 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76092 kB' 'KernelStack: 6352 kB' 
'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 
10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 
10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.699 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == 
nr_hugepages + surp + resv )) 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:27.700 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.701 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9161892 kB' 'MemUsed: 3080084 kB' 'SwapCached: 0 kB' 'Active: 456520 kB' 'Inactive: 1273840 kB' 'Active(anon): 132000 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1608824 kB' 'Mapped: 51332 kB' 'AnonPages: 123140 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61332 kB' 'Slab: 137420 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.701 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:27.702 node0=512 expecting 512 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:27.702 00:04:27.702 real 0m0.628s 00:04:27.702 user 0m0.321s 00:04:27.702 sys 0m0.352s 00:04:27.702 10:05:00 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.702 10:05:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:27.702 ************************************ 00:04:27.702 END TEST per_node_1G_alloc 00:04:27.702 ************************************ 00:04:27.702 10:05:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:27.702 10:05:00 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:27.702 10:05:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.702 10:05:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.702 10:05:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.702 ************************************ 00:04:27.702 START TEST even_2G_alloc 00:04:27.702 ************************************ 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local 
_nr_hugepages=1024 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.702 10:05:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:28.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.274 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.274 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:28.274 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8112208 kB' 'MemAvailable: 9504300 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456940 kB' 'Inactive: 1273844 kB' 'Active(anon): 132420 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123524 kB' 'Mapped: 51460 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137504 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76172 kB' 'KernelStack: 6368 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.274 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.274 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.274 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[repetitive trace elided: the identical IFS=': ' / read / compare / continue sequence repeats for each remaining /proc/meminfo key (Mlocked through HardwareCorrupted), none matching AnonHugePages]
00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.275 10:05:01
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8112208 kB' 'MemAvailable: 9504300 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456372 kB' 'Inactive: 1273844 kB' 'Active(anon): 131852 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122936 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137516 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76184 kB' 'KernelStack: 6368 kB' 
'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.275 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.276 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.276 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.276 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[repetitive trace elided: the identical IFS=': ' / read / compare / continue sequence repeats for each remaining /proc/meminfo key (Cached through HugePages_Rsvd), none matching HugePages_Surp]
00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8112208 kB' 'MemAvailable: 9504300 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456760 kB' 'Inactive: 1273844 kB' 'Active(anon): 132240 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 
'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123324 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137516 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76184 kB' 'KernelStack: 6336 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.277 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.277 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.278 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.279 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:28.279 nr_hugepages=1024 00:04:28.279 resv_hugepages=0 00:04:28.279 surplus_hugepages=0 00:04:28.279 anon_hugepages=0 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.279 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8112208 kB' 'MemAvailable: 9504300 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456544 kB' 'Inactive: 1273844 kB' 'Active(anon): 132024 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123196 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137516 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76184 kB' 'KernelStack: 6368 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.280 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.281 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.540 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.541 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8112208 kB' 'MemUsed: 4129768 kB' 'SwapCached: 0 kB' 'Active: 456496 kB' 'Inactive: 1273844 kB' 'Active(anon): 131976 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1608828 kB' 'Mapped: 51332 kB' 'AnonPages: 123088 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61332 kB' 'Slab: 137516 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 
10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.541 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.541 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.542 node0=1024 expecting 1024 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:28.542 00:04:28.542 real 0m0.653s 00:04:28.542 user 0m0.276s 00:04:28.542 sys 0m0.361s 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.542 10:05:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:28.542 ************************************ 00:04:28.542 END TEST even_2G_alloc 00:04:28.542 ************************************ 00:04:28.542 10:05:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:28.542 10:05:01 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:28.542 10:05:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.542 10:05:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.542 10:05:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:28.542 
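The loop traced above (repeated `IFS=': '` / `read -r var val _` / `continue` until the requested key matches, then `echo` of the value at `setup/common.sh@33`) can be sketched as follows. This is a minimal illustration of the pattern, not the project's actual `get_meminfo` implementation; the function name `get_meminfo_sketch` is made up for this example, and it assumes a Linux `/proc/meminfo`:

```shell
#!/usr/bin/env bash
# Hedged sketch (illustrative, NOT the real setup/common.sh): scan
# /proc/meminfo one "Key: value kB" line at a time, splitting on ": "
# exactly as the trace above does, and skip every line until the
# requested key (e.g. HugePages_Total, HugePages_Surp) is found.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Emit the value once the key matches, mirroring the
        # "echo 1024" / "return 0" seen at setup/common.sh@33.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

# Usage: prints the current hugepage count (1024 in the run logged above,
# but the value depends on the machine).
get_meminfo_sketch HugePages_Total
```

The per-node variant in the trace works the same way, except it reads `/sys/devices/system/node/node0/meminfo` and first strips the leading `Node N ` prefix from each line (the `mem=("${mem[@]#Node +([0-9]) }")` step, which relies on `extglob`).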
************************************ 00:04:28.542 START TEST odd_alloc 00:04:28.542 ************************************ 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@84 -- # : 0 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.542 10:05:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:28.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.801 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.801 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@18 -- # local node= 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.064 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8113724 kB' 'MemAvailable: 9505816 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456628 kB' 'Inactive: 1273844 kB' 'Active(anon): 132108 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123204 kB' 'Mapped: 51456 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137492 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76160 kB' 'KernelStack: 6376 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 
'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.065 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 
10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8113724 kB' 'MemAvailable: 9505816 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456632 kB' 'Inactive: 1273844 
kB' 'Active(anon): 132112 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123252 kB' 'Mapped: 51456 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137488 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76156 kB' 'KernelStack: 6360 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.066 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.066 10:05:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.067 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Rsvd 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8113472 kB' 'MemAvailable: 9505564 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456360 kB' 'Inactive: 1273844 kB' 'Active(anon): 131840 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123264 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137488 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76156 kB' 'KernelStack: 6368 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.068 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.069 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:29.070 nr_hugepages=1025 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.070 resv_hugepages=0 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.070 surplus_hugepages=0 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.070 anon_hugepages=0 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8113472 kB' 'MemAvailable: 9505564 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456512 kB' 'Inactive: 1273844 kB' 'Active(anon): 131992 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123120 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137488 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76156 kB' 'KernelStack: 6352 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.070 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.070 
10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 
10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 
10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 
10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.071 
10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.071 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.072 10:05:02 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8113472 kB' 'MemUsed: 4128504 kB' 'SwapCached: 0 kB' 'Active: 456288 kB' 'Inactive: 1273844 kB' 'Active(anon): 131768 kB' 'Inactive(anon): 
0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1608828 kB' 'Mapped: 51332 kB' 'AnonPages: 122920 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61332 kB' 'Slab: 137488 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.072 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 
10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 
10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.073 node0=1025 expecting 1025 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:29.073 
00:04:29.073 real 0m0.649s 00:04:29.073 user 0m0.282s 00:04:29.073 sys 0m0.373s 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.073 ************************************ 00:04:29.073 END TEST odd_alloc 00:04:29.073 ************************************ 00:04:29.073 10:05:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.073 10:05:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:29.073 10:05:02 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:29.073 10:05:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.073 10:05:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.074 10:05:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.332 ************************************ 00:04:29.332 START TEST custom_alloc 00:04:29.332 ************************************ 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:29.332 10:05:02 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.332 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI 
dev 00:04:29.592 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.592 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.592 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:29.592 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:29.592 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.592 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 
00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9161320 kB' 'MemAvailable: 10553408 kB' 'Buffers: 2436 kB' 'Cached: 1606388 kB' 'SwapCached: 0 kB' 'Active: 456772 kB' 'Inactive: 1273840 kB' 'Active(anon): 132252 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123372 kB' 'Mapped: 51496 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137528 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76196 kB' 'KernelStack: 6324 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.593 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 
10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9161320 kB' 'MemAvailable: 10553412 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456316 kB' 'Inactive: 1273844 kB' 'Active(anon): 131796 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123196 kB' 'Mapped: 51248 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137544 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76212 kB' 'KernelStack: 6384 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.594 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.595 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.857 
10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.857 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.858 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9161616 kB' 'MemAvailable: 10553708 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456248 kB' 'Inactive: 1273844 kB' 'Active(anon): 131728 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123136 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137532 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76200 kB' 'KernelStack: 6352 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.859 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.860 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:29.861 nr_hugepages=512 00:04:29.861 10:05:02 
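The long run of `IFS=': '` / `read -r var val _` / `continue` lines traced above is setup/common.sh's get_meminfo loop scanning every meminfo field until the requested key (here `HugePages_Rsvd`) matches, then echoing its value. A minimal sketch of that loop, assuming an illustrative helper name and a 0 fallback for an absent key (the real script handles the match via its `echo`/`return 0` pair at common.sh@33):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo parsing loop seen in the trace: split each line
# on ": ", skip non-matching keys (the "continue" lines), echo the value on
# a match. Function name and the absent-key fallback are assumptions.
get_meminfo_value() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _rest
    while IFS=': ' read -r var val _rest; do
        [[ $var != "$get" ]] && continue   # mirrors the traced continues
        echo "$val"
        return 0
    done < "$mem_f"
    echo 0   # assumption: fall back to 0 when the key never appears
}
```

Run against a meminfo snapshot like the one printf'd in the trace, `get_meminfo_value HugePages_Rsvd` yields the `0` that hugepages.sh@100 stores as `resv=0`.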
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.861 resv_hugepages=0 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.861 surplus_hugepages=0 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.861 anon_hugepages=0 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9161616 kB' 
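The `mem=("${mem[@]#Node +([0-9]) }")` step at common.sh@29 above exists because per-node files (`/sys/devices/system/node/node<N>/meminfo`) prefix every line with `Node <N> `, unlike plain /proc/meminfo; stripping that prefix lets the same key/value loop parse both. A small sketch of just that expansion, with an illustrative function name:

```shell
shopt -s extglob   # required for the +([0-9]) pattern, as in setup/common.sh

# Sketch of the prefix-strip step from the trace: read meminfo lines from
# stdin and remove any leading "Node <N> " so node files and /proc/meminfo
# look identical to the parser. Function name is an assumption.
strip_node_prefix() {
    local -a mem
    mapfile -t mem                        # collect all input lines
    mem=("${mem[@]#Node +([0-9]) }")      # drop "Node 0 ", "Node 1 ", ...
    printf '%s\n' "${mem[@]}"
}
```

Lines without the prefix pass through unchanged, which is why the node-less path (mem_f left at /proc/meminfo, as in the `node=` case above) works with the same expansion applied.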
'MemAvailable: 10553708 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456292 kB' 'Inactive: 1273844 kB' 'Active(anon): 131772 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123216 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137532 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76200 kB' 'KernelStack: 6384 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.861 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 
10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.862 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.863 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.864 
10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.864 10:05:02 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9161616 kB' 'MemUsed: 3080360 kB' 'SwapCached: 0 kB' 'Active: 456520 kB' 'Inactive: 1273844 kB' 'Active(anon): 132000 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1608828 
kB' 'Mapped: 51332 kB' 'AnonPages: 123196 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61332 kB' 'Slab: 137528 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.864 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.866 node0=512 expecting 512 00:04:29.866 ************************************ 00:04:29.866 END TEST custom_alloc 00:04:29.866 ************************************ 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.866 10:05:02 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:29.866 00:04:29.866 real 0m0.670s 00:04:29.866 user 0m0.307s 00:04:29.866 sys 0m0.371s 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.866 10:05:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.866 10:05:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:29.866 10:05:03 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:29.866 10:05:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.866 10:05:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.866 10:05:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.866 ************************************ 00:04:29.866 START TEST no_shrink_alloc 00:04:29.866 ************************************ 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:29.866 10:05:03 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.866 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.439 0000:00:11.0 (1b36 0010): Already 
using the uio_pci_generic driver 00:04:30.439 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8110812 kB' 'MemAvailable: 9502904 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456396 kB' 'Inactive: 1273844 kB' 'Active(anon): 131876 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123248 kB' 'Mapped: 51448 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137512 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76180 kB' 'KernelStack: 6392 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.439 
10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.439 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical `IFS=': '` / `read -r var val _` / `continue` iterations for the remaining non-matching /proc/meminfo keys (Inactive through HardwareCorrupted) elided ...]
00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.440 10:05:03
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.440 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111064 kB' 'MemAvailable: 9503156 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456320 kB' 'Inactive: 1273844 kB' 'Active(anon): 131800 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123140 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137516 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76184 kB' 'KernelStack: 6352 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
[... identical `IFS=': '` / `read -r var val _` / `continue` iterations for the non-matching /proc/meminfo keys (MemTotal through HugePages_Rsvd) elided ...]
00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 --
# mem_f=/proc/meminfo 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111064 kB' 'MemAvailable: 9503156 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456336 kB' 'Inactive: 1273844 kB' 'Active(anon): 131816 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123232 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137516 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76184 kB' 'KernelStack: 6368 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 
9437184 kB' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.441 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.442 nr_hugepages=1024 00:04:30.442 resv_hugepages=0 00:04:30.442 surplus_hugepages=0 00:04:30.442 anon_hugepages=0 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 
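The trace above shows SPDK's `get_meminfo` helper at work: it reads `/proc/meminfo` line by line with `IFS=': '`, skips (`continue`) every key that does not match the requested one, and echoes the matching value. A minimal standalone sketch of the same pattern (hypothetical script for illustration, not the actual `setup/common.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop seen in the trace: split each line of a
# meminfo-style file on ': ', skip non-matching keys, print the value.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Mirrors the repeated [[ $var == \H\u\g\e... ]] / continue pairs
        # in the log: literal match against the requested key.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Example against a fake meminfo snapshot:
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Rsvd: 0' > /tmp/fake_meminfo
get_meminfo HugePages_Total /tmp/fake_meminfo
get_meminfo HugePages_Rsvd /tmp/fake_meminfo
```

In the real helper the trace is generated because `set -x` logs every iteration of the loop, which is why each `/proc/meminfo` key appears once per `get_meminfo` call.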
00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111064 kB' 'MemAvailable: 9503156 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 456540 kB' 'Inactive: 1273844 kB' 'Active(anon): 132020 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 
'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122924 kB' 'Mapped: 51332 kB' 'Shmem: 10464 kB' 'KReclaimable: 61332 kB' 'Slab: 137508 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76176 kB' 'KernelStack: 6352 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@27 -- # local node 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8112448 kB' 'MemUsed: 4129528 kB' 'SwapCached: 0 kB' 'Active: 456284 kB' 'Inactive: 1273844 kB' 'Active(anon): 131764 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1608828 kB' 'Mapped: 51332 kB' 'AnonPages: 123188 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61332 kB' 'Slab: 137500 kB' 'SReclaimable: 61332 kB' 'SUnreclaim: 76168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.443 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.444 10:05:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:30.444 node0=1024 expecting 1024 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.444 10:05:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:31.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.012 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.012 
0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.012 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8115820 kB' 'MemAvailable: 9507908 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 452092 kB' 'Inactive: 1273844 kB' 'Active(anon): 127572 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 118868 kB' 'Mapped: 50776 kB' 'Shmem: 10464 kB' 'KReclaimable: 61328 kB' 'Slab: 137200 kB' 'SReclaimable: 61328 kB' 'SUnreclaim: 75872 kB' 'KernelStack: 6272 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.012 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 
10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 
10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.013 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8115820 kB' 'MemAvailable: 9507908 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 451708 kB' 'Inactive: 1273844 kB' 'Active(anon): 127188 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 118148 kB' 'Mapped: 50592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61328 kB' 'Slab: 137196 kB' 'SReclaimable: 61328 kB' 'SUnreclaim: 75868 kB' 'KernelStack: 6268 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.014 
10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.014 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.014 
10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 
10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.015 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- 
# mem_f=/proc/meminfo 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8115568 kB' 'MemAvailable: 9507656 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 451708 kB' 'Inactive: 1273844 kB' 'Active(anon): 127188 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 118148 kB' 'Mapped: 50592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61328 kB' 'Slab: 137196 kB' 'SReclaimable: 61328 kB' 'SUnreclaim: 75868 kB' 'KernelStack: 6268 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 
9437184 kB' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.016 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.017 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:31.018 nr_hugepages=1024 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:31.018 resv_hugepages=0 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:31.018 surplus_hugepages=0 00:04:31.018 anon_hugepages=0 
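The trace above is one pass of the `get_meminfo` helper: it splits each `/proc/meminfo` line on `': '` with `read -r var val _`, hits `continue` for every field that is not the requested key (here `HugePages_Rsvd`), then echoes the matching value and returns. A minimal reconstruction of that pattern, using inline sample data rather than the live `/proc/meminfo` and omitting the per-node handling the real `setup/common.sh` does, might look like:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the get_meminfo loop seen in the trace; not the
# exact SPDK helper. Sample data stands in for /proc/meminfo.

sample='MemTotal: 12241976 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0'

get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching field produces one "continue" line in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done <<< "$sample"
    return 1
}

get_meminfo HugePages_Rsvd    # prints 0
get_meminfo HugePages_Total   # prints 1024
```

With `set -x` enabled, each loop iteration emits the `IFS=': '`, `read -r var val _`, `[[ ... ]]`, and `continue` quartet, which is why the log repeats that block once per meminfo field.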
00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.018 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:31.278 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:31.278 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:31.278 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.278 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.278 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.278 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.278 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.278 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.278 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.278 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.278 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8115568 kB' 'MemAvailable: 9507656 kB' 'Buffers: 2436 kB' 'Cached: 1606392 kB' 'SwapCached: 0 kB' 'Active: 451708 kB' 'Inactive: 1273844 kB' 'Active(anon): 127188 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 118148 kB' 'Mapped: 50592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61328 kB' 'Slab: 137196 kB' 'SReclaimable: 61328 kB' 'SUnreclaim: 75868 kB' 'KernelStack: 6268 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.279 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@27 -- # local node 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8115568 kB' 'MemUsed: 4126408 kB' 'SwapCached: 0 kB' 'Active: 451716 kB' 'Inactive: 1273844 kB' 'Active(anon): 127196 kB' 'Inactive(anon): 0 kB' 'Active(file): 324520 kB' 'Inactive(file): 1273844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1608828 kB' 'Mapped: 50592 kB' 'AnonPages: 118148 kB' 'Shmem: 10464 kB' 'KernelStack: 6268 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61328 kB' 'Slab: 137196 kB' 'SReclaimable: 61328 kB' 'SUnreclaim: 75868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.280 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical setup/common.sh@31-32 read/compare/continue trace repeats for each remaining meminfo field (MemFree through HugePages_Free) until HugePages_Surp matches ...]
00:04:31.282 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.282 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.282 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.282 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.282 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.282 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.282 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.282 node0=1024 expecting 1024 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:31.282 10:05:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:31.282 00:04:31.282 real 0m1.285s 00:04:31.282 user 0m0.583s 00:04:31.282 sys 0m0.720s 00:04:31.282 ************************************ 00:04:31.282 END TEST no_shrink_alloc 00:04:31.282 ************************************ 00:04:31.282 10:05:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.282 10:05:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:31.282 10:05:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:31.282 10:05:04 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:31.282 10:05:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:31.282 10:05:04 setup.sh.hugepages -- setup/hugepages.sh@39
-- # for node in "${!nodes_sys[@]}" 00:04:31.282 10:05:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.282 10:05:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.282 10:05:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.282 10:05:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.282 10:05:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:31.282 10:05:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:31.282 00:04:31.282 real 0m5.428s 00:04:31.282 user 0m2.413s 00:04:31.282 sys 0m3.060s 00:04:31.282 10:05:04 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.282 10:05:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:31.282 ************************************ 00:04:31.282 END TEST hugepages 00:04:31.282 ************************************ 00:04:31.282 10:05:04 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:31.282 10:05:04 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:31.282 10:05:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.282 10:05:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.282 10:05:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:31.282 ************************************ 00:04:31.282 START TEST driver 00:04:31.282 ************************************ 00:04:31.282 10:05:04 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:31.540 * Looking for test storage... 
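The setup/common.sh@31-33 trace above is a field scan over meminfo-style output: split each "Key: value" line with IFS=': ', skip until the wanted key matches, then emit its value. A minimal standalone sketch of that pattern (the input file is a parameter here, an assumption for testability; the log reads the node's real meminfo):

```shell
#!/usr/bin/env bash
# Sketch of the common.sh read/compare/continue loop traced in the log:
# scan "Key: value" lines and print the value of one wanted field.
get_meminfo_field() {
    local wanted=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$wanted" ]] || continue   # mirrors common.sh@32
        echo "$val"                           # mirrors common.sh@33
        return 0
    done < "$file"
    return 1
}
```

On the snapshot printed in the log, `get_meminfo_field HugePages_Surp` would walk every field (MemTotal, MemFree, ...) before matching, which is exactly the long repeated trace collapsed above.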
00:04:31.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:31.540 10:05:04 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:31.540 10:05:04 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:31.540 10:05:04 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:32.106 10:05:05 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:32.107 10:05:05 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.107 10:05:05 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.107 10:05:05 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:32.107 ************************************ 00:04:32.107 START TEST guess_driver 00:04:32.107 ************************************ 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:32.107 10:05:05
setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:32.107 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:32.107 Looking for driver=uio_pci_generic 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.107 10:05:05 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:33.043 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:33.043 10:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:33.043 10:05:05 
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.043 10:05:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.043 10:05:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:33.043 10:05:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.043 10:05:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.043 10:05:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:33.043 10:05:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.043 10:05:06 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:33.043 10:05:06 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:33.043 10:05:06 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.044 10:05:06 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.993 00:04:33.993 real 0m1.667s 00:04:33.993 user 0m0.596s 00:04:33.993 sys 0m1.103s 00:04:33.993 10:05:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.993 10:05:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:33.993 ************************************ 00:04:33.993 END TEST guess_driver 00:04:33.993 ************************************ 00:04:33.993 10:05:06 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:33.993 ************************************ 00:04:33.993 END TEST driver 00:04:33.993 ************************************ 00:04:33.993 00:04:33.993 real 0m2.496s 00:04:33.993 user 0m0.890s 00:04:33.993 sys 0m1.716s 00:04:33.993 10:05:06 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 
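The guess_driver trace above implements a fallback chain: prefer vfio when IOMMU groups exist (or unsafe no-IOMMU mode reads `Y`), otherwise accept uio_pci_generic if `modprobe --show-depends` can resolve it to a `.ko`. A sketch of that logic, reconstructed from the traced setup/driver.sh@36-49 (the sysfs paths are parameters here, an assumption, so the chain can be exercised against a fake tree without real hardware):

```shell
#!/usr/bin/env bash
# Sketch of pick_driver as traced in the log: vfio-pci if IOMMU groups
# exist or unsafe no-IOMMU mode is enabled, else fall back to uio.
pick_driver() {
    local iommu_dir=${1:-/sys/kernel/iommu_groups}
    local unsafe_file=${2:-/sys/module/vfio/parameters/enable_unsafe_noiommu_mode}
    local groups=("$iommu_dir"/*) unsafe=
    [[ -e $unsafe_file ]] && unsafe=$(<"$unsafe_file")
    if [[ -e ${groups[0]} || $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        # In the log this branch matched: modprobe resolved uio.ko.xz and
        # uio_pci_generic.ko.xz, so the run picked uio_pci_generic.
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}
```

In the logged run no IOMMU groups were present (`(( 0 > 0 ))` failed), so the test fell through to the modprobe probe and settled on uio_pci_generic.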
00:04:33.993 10:05:06 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:33.993 10:05:06 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:33.993 10:05:06 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:33.993 10:05:06 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.993 10:05:06 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.993 10:05:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:33.993 ************************************ 00:04:33.993 START TEST devices 00:04:33.993 ************************************ 00:04:33.993 10:05:06 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:33.993 * Looking for test storage... 00:04:33.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:33.993 10:05:07 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:33.993 10:05:07 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:33.993 10:05:07 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.993 10:05:07 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:34.929 10:05:07 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:34.929 10:05:07 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:34.929 10:05:07 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:34.929 10:05:07 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:34.929 10:05:07 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:34.930 10:05:07 
setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:34.930 10:05:07 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@197 -- # 
blocks_to_pci=() 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:34.930 10:05:07 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:34.930 10:05:07 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:34.930 No valid GPT data, bailing 00:04:34.930 10:05:07 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:34.930 10:05:07 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:34.930 10:05:07 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:34.930 10:05:07 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:34.930 10:05:07 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:34.930 10:05:07 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@200 -- # for 
block in "/sys/block/nvme"!(*c*) 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:34.930 10:05:07 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:34.930 10:05:07 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:34.930 10:05:07 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:34.930 No valid GPT data, bailing 00:04:34.930 10:05:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:34.930 10:05:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:34.930 10:05:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:34.930 10:05:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:34.930 10:05:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:34.930 10:05:08 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 
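The devices trace above gates each NVMe namespace on a minimum capacity: `sec_size_to_bytes` echoed 4294967296 for the 4 GiB namespaces, which was then compared against `min_disk_size=3221225472` (3 GiB). A sketch of that gate, assuming the usual sysfs convention of a `size` file in 512-byte sectors (the sysfs root is a parameter here, an assumption for testability; the real script reads /sys/block):

```shell
#!/usr/bin/env bash
# Sketch of the setup/devices.sh@198,204 size gate traced in the log.
min_disk_size=3221225472   # 3 GiB, as in the log

# Read a block device's capacity in bytes from a sysfs-style "size" file
# (512-byte sector count).
sec_size_to_bytes() {
    local dev=$1 sysfs=${2:-/sys/block}
    [[ -e $sysfs/$dev/size ]] || return 1
    echo $(( $(<"$sysfs/$dev/size") * 512 ))
}

# A device qualifies as test storage only if it meets the minimum size.
device_qualifies() {
    local bytes
    bytes=$(sec_size_to_bytes "$@") || return 1
    (( bytes >= min_disk_size ))
}
```

This matches the log's arithmetic: 4294967296 >= 3221225472 held for every namespace, so all four blocks were admitted and nvme0n1 became the test disk.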
00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:34.930 10:05:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:34.930 10:05:08 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:34.930 No valid GPT data, bailing 00:04:34.930 10:05:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:34.930 10:05:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:34.930 10:05:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:34.930 10:05:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:34.930 10:05:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:34.930 10:05:08 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:34.930 10:05:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:34.930 10:05:08 setup.sh.devices -- scripts/common.sh@378 -- # local 
block=nvme1n1 pt 00:04:34.930 10:05:08 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:34.930 No valid GPT data, bailing 00:04:34.930 10:05:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:35.190 10:05:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:35.190 10:05:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:35.190 10:05:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:35.190 10:05:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:35.190 10:05:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:35.190 10:05:08 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:35.190 10:05:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:35.190 10:05:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:35.190 10:05:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:35.190 10:05:08 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:35.190 10:05:08 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:35.190 10:05:08 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:35.190 10:05:08 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.190 10:05:08 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.190 10:05:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:35.190 ************************************ 00:04:35.190 START TEST nvme_mount 00:04:35.190 ************************************ 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:35.190 10:05:08 
setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:35.190 10:05:08 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:36.127 Creating new GPT entries in memory. 00:04:36.127 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:04:36.127 other utilities. 00:04:36.127 10:05:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:36.127 10:05:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.127 10:05:09 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:36.127 10:05:09 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:36.127 10:05:09 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:37.060 Creating new GPT entries in memory. 00:04:37.060 The operation has completed successfully. 00:04:37.060 10:05:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:37.060 10:05:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:37.061 10:05:10 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57100 00:04:37.061 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.061 10:05:10 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:37.061 10:05:10 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.061 10:05:10 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:37.061 10:05:10 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.320 10:05:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:37.577 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.577 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:37.577 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # 
found=1 00:04:37.577 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.577 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.577 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.577 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.577 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.577 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.577 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.836 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.836 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:37.836 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.836 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.836 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:37.836 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:37.836 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.836 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.836 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.836 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@25 
-- # wipefs --all /dev/nvme0n1p1 00:04:37.836 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.836 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.836 10:05:10 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.094 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:38.094 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:38.094 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:38.094 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- 
setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.094 10:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:38.351 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.351 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:38.351 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:38.351 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.351 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.351 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount 
-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.609 10:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:38.867 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.867 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:38.867 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:38.867 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.867 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.867 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.134 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:39.134 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.134 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:39.134 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.411 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.411 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:39.411 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 
00:04:39.411 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:39.411 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.411 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.411 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.411 10:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.411 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.411 00:04:39.411 real 0m4.230s 00:04:39.411 user 0m0.739s 00:04:39.411 sys 0m1.253s 00:04:39.411 10:05:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.411 10:05:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:39.411 ************************************ 00:04:39.411 END TEST nvme_mount 00:04:39.411 ************************************ 00:04:39.411 10:05:12 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:39.411 10:05:12 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:39.411 10:05:12 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.411 10:05:12 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.411 10:05:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:39.411 ************************************ 00:04:39.411 START TEST dm_mount 00:04:39.411 ************************************ 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 
00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:39.411 10:05:12 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:40.346 Creating new GPT entries in memory. 00:04:40.346 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:40.346 other utilities. 
00:04:40.346 10:05:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:40.346 10:05:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.346 10:05:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.346 10:05:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.346 10:05:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:41.325 Creating new GPT entries in memory. 00:04:41.325 The operation has completed successfully. 00:04:41.325 10:05:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.325 10:05:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.325 10:05:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.325 10:05:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.325 10:05:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:42.703 The operation has completed successfully. 
00:04:42.703 10:05:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.703 10:05:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57536 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test 
mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 
00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.704 10:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.963 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.963 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.963 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.963 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.963 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.963 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:42.963 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:42.963 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:42.963 10:05:16 
setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:42.963 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.220 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.478 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.478 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.478 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.478 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.736 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.736 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:43.736 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:43.736 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:43.736 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:43.736 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:43.736 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:43.736 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.736 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:43.736 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:43.736 10:05:16 setup.sh.devices.dm_mount -- 
setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:43.736 10:05:16 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:43.736 00:04:43.736 real 0m4.373s 00:04:43.736 user 0m0.486s 00:04:43.736 sys 0m0.820s 00:04:43.736 10:05:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.736 10:05:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:43.736 ************************************ 00:04:43.736 END TEST dm_mount 00:04:43.736 ************************************ 00:04:43.736 10:05:16 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:43.736 10:05:16 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:43.736 10:05:16 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:43.736 10:05:16 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.736 10:05:16 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.736 10:05:16 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:43.736 10:05:16 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.736 10:05:16 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:43.994 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:43.994 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:43.994 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:43.994 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:43.994 10:05:17 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:43.994 10:05:17 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:43.994 10:05:17 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:43.994 10:05:17 
setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.994 10:05:17 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:43.994 10:05:17 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.994 10:05:17 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:43.994 00:04:43.994 real 0m10.208s 00:04:43.994 user 0m1.875s 00:04:43.994 sys 0m2.754s 00:04:43.994 10:05:17 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.994 10:05:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:43.994 ************************************ 00:04:43.994 END TEST devices 00:04:43.994 ************************************ 00:04:43.994 10:05:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:43.994 00:04:43.994 real 0m23.678s 00:04:43.994 user 0m7.420s 00:04:43.994 sys 0m10.841s 00:04:43.994 10:05:17 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.994 10:05:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:43.994 ************************************ 00:04:43.994 END TEST setup.sh 00:04:43.994 ************************************ 00:04:44.252 10:05:17 -- common/autotest_common.sh@1142 -- # return 0 00:04:44.252 10:05:17 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:44.820 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:44.820 Hugepages 00:04:44.820 node hugesize free / total 00:04:44.820 node0 1048576kB 0 / 0 00:04:44.820 node0 2048kB 2048 / 2048 00:04:44.820 00:04:44.820 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.078 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:45.078 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:45.078 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:45.078 10:05:18 -- spdk/autotest.sh@130 -- # uname 
-s 00:04:45.078 10:05:18 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:45.078 10:05:18 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:45.078 10:05:18 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.012 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.012 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.012 10:05:19 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:46.944 10:05:20 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:46.944 10:05:20 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:46.944 10:05:20 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:46.944 10:05:20 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:46.944 10:05:20 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:46.944 10:05:20 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:46.944 10:05:20 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:46.944 10:05:20 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:47.201 10:05:20 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:47.201 10:05:20 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:47.201 10:05:20 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:47.201 10:05:20 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.460 Waiting for block devices as requested 00:04:47.718 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:47.718 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:47.718 10:05:20 -- 
common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:47.718 10:05:20 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:47.718 10:05:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:47.719 10:05:20 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:47.719 10:05:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:47.719 10:05:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:47.719 10:05:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:47.719 10:05:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:47.719 10:05:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:47.719 10:05:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:47.719 10:05:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:47.719 10:05:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:47.719 10:05:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:47.719 10:05:20 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:47.719 10:05:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:47.719 10:05:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:47.719 10:05:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:47.719 10:05:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:47.719 10:05:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:47.719 10:05:20 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:47.719 10:05:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:47.719 10:05:20 -- common/autotest_common.sh@1557 -- # continue 00:04:47.719 10:05:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:47.719 10:05:20 -- 
common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:47.719 10:05:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:47.719 10:05:20 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:47.719 10:05:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:47.719 10:05:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:47.719 10:05:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:47.978 10:05:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:47.978 10:05:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:47.978 10:05:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:47.978 10:05:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:47.978 10:05:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:47.978 10:05:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:47.978 10:05:20 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:47.978 10:05:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:47.978 10:05:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:47.978 10:05:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:47.978 10:05:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:47.978 10:05:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:47.978 10:05:20 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:47.978 10:05:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:47.978 10:05:20 -- common/autotest_common.sh@1557 -- # continue 00:04:47.978 10:05:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:47.978 10:05:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:47.978 10:05:20 -- common/autotest_common.sh@10 -- 
# set +x 00:04:47.978 10:05:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:47.978 10:05:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:47.978 10:05:21 -- common/autotest_common.sh@10 -- # set +x 00:04:47.978 10:05:21 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:48.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.802 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.802 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.802 10:05:21 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:48.802 10:05:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:48.802 10:05:21 -- common/autotest_common.sh@10 -- # set +x 00:04:48.802 10:05:22 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:48.802 10:05:22 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:48.802 10:05:22 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:48.802 10:05:22 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:48.802 10:05:22 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:48.802 10:05:22 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:48.802 10:05:22 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:48.802 10:05:22 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:48.802 10:05:22 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:48.802 10:05:22 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:48.802 10:05:22 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:49.060 10:05:22 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:49.060 10:05:22 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:49.060 10:05:22 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:49.060 
10:05:22 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:49.060 10:05:22 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:49.060 10:05:22 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:49.060 10:05:22 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:49.060 10:05:22 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:49.060 10:05:22 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:49.060 10:05:22 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:49.060 10:05:22 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:49.060 10:05:22 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:49.060 10:05:22 -- common/autotest_common.sh@1593 -- # return 0 00:04:49.060 10:05:22 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:49.060 10:05:22 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:49.060 10:05:22 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:49.060 10:05:22 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:49.060 10:05:22 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:49.060 10:05:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.061 10:05:22 -- common/autotest_common.sh@10 -- # set +x 00:04:49.061 10:05:22 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:49.061 10:05:22 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:49.061 10:05:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.061 10:05:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.061 10:05:22 -- common/autotest_common.sh@10 -- # set +x 00:04:49.061 ************************************ 00:04:49.061 START TEST env 00:04:49.061 ************************************ 00:04:49.061 10:05:22 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:49.061 * Looking for test storage... 
00:04:49.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:49.061 10:05:22 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:49.061 10:05:22 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.061 10:05:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.061 10:05:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.061 ************************************ 00:04:49.061 START TEST env_memory 00:04:49.061 ************************************ 00:04:49.061 10:05:22 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:49.061 00:04:49.061 00:04:49.061 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.061 http://cunit.sourceforge.net/ 00:04:49.061 00:04:49.061 00:04:49.061 Suite: memory 00:04:49.061 Test: alloc and free memory map ...[2024-07-25 10:05:22.298989] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:49.319 passed 00:04:49.319 Test: mem map translation ...[2024-07-25 10:05:22.342819] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:49.319 [2024-07-25 10:05:22.342881] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:49.319 [2024-07-25 10:05:22.342944] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:49.319 [2024-07-25 10:05:22.342957] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:49.319 passed 00:04:49.319 Test: mem map registration ...[2024-07-25 10:05:22.406824] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:49.319 [2024-07-25 10:05:22.406880] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:49.319 passed 00:04:49.319 Test: mem map adjacent registrations ...passed 00:04:49.319 00:04:49.319 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.319 suites 1 1 n/a 0 0 00:04:49.319 tests 4 4 4 0 0 00:04:49.319 asserts 152 152 152 0 n/a 00:04:49.319 00:04:49.319 Elapsed time = 0.238 seconds 00:04:49.319 00:04:49.319 real 0m0.250s 00:04:49.319 user 0m0.234s 00:04:49.319 sys 0m0.013s 00:04:49.319 10:05:22 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.319 10:05:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:49.319 ************************************ 00:04:49.319 END TEST env_memory 00:04:49.319 ************************************ 00:04:49.319 10:05:22 env -- common/autotest_common.sh@1142 -- # return 0 00:04:49.319 10:05:22 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:49.319 10:05:22 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.319 10:05:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.319 10:05:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.319 ************************************ 00:04:49.319 START TEST env_vtophys 00:04:49.319 ************************************ 00:04:49.319 10:05:22 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:49.319 EAL: lib.eal log level changed from notice to debug 00:04:49.319 EAL: Detected lcore 0 as core 0 on socket 0 00:04:49.319 EAL: Detected lcore 1 as core 0 on socket 0 00:04:49.319 EAL: Detected lcore 2 as core 0 on socket 0 00:04:49.319 EAL: 
Detected lcore 3 as core 0 on socket 0 00:04:49.319 EAL: Detected lcore 4 as core 0 on socket 0 00:04:49.319 EAL: Detected lcore 5 as core 0 on socket 0 00:04:49.319 EAL: Detected lcore 6 as core 0 on socket 0 00:04:49.319 EAL: Detected lcore 7 as core 0 on socket 0 00:04:49.319 EAL: Detected lcore 8 as core 0 on socket 0 00:04:49.319 EAL: Detected lcore 9 as core 0 on socket 0 00:04:49.577 EAL: Maximum logical cores by configuration: 128 00:04:49.577 EAL: Detected CPU lcores: 10 00:04:49.577 EAL: Detected NUMA nodes: 1 00:04:49.577 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:49.577 EAL: Detected shared linkage of DPDK 00:04:49.577 EAL: No shared files mode enabled, IPC will be disabled 00:04:49.577 EAL: Selected IOVA mode 'PA' 00:04:49.577 EAL: Probing VFIO support... 00:04:49.577 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:49.577 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:49.577 EAL: Ask a virtual area of 0x2e000 bytes 00:04:49.577 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:49.577 EAL: Setting up physically contiguous memory... 
00:04:49.577 EAL: Setting maximum number of open files to 524288 00:04:49.577 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:49.577 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:49.577 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.577 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:49.577 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.577 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.577 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:49.577 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:49.577 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.577 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:49.577 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.577 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.577 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:49.577 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:49.577 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.577 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:49.577 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.577 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.577 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:49.577 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:49.577 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.577 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:49.577 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.577 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.577 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:49.577 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:49.577 EAL: Hugepages will be freed exactly as allocated. 
00:04:49.577 EAL: No shared files mode enabled, IPC is disabled 00:04:49.577 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: TSC frequency is ~2100000 KHz 00:04:49.578 EAL: Main lcore 0 is ready (tid=7fe7a124ca00;cpuset=[0]) 00:04:49.578 EAL: Trying to obtain current memory policy. 00:04:49.578 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.578 EAL: Restoring previous memory policy: 0 00:04:49.578 EAL: request: mp_malloc_sync 00:04:49.578 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: Heap on socket 0 was expanded by 2MB 00:04:49.578 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:49.578 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:49.578 EAL: Mem event callback 'spdk:(nil)' registered 00:04:49.578 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:49.578 00:04:49.578 00:04:49.578 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.578 http://cunit.sourceforge.net/ 00:04:49.578 00:04:49.578 00:04:49.578 Suite: components_suite 00:04:49.578 Test: vtophys_malloc_test ...passed 00:04:49.578 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:49.578 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.578 EAL: Restoring previous memory policy: 4 00:04:49.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.578 EAL: request: mp_malloc_sync 00:04:49.578 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: Heap on socket 0 was expanded by 4MB 00:04:49.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.578 EAL: request: mp_malloc_sync 00:04:49.578 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: Heap on socket 0 was shrunk by 4MB 00:04:49.578 EAL: Trying to obtain current memory policy. 
00:04:49.578 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.578 EAL: Restoring previous memory policy: 4 00:04:49.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.578 EAL: request: mp_malloc_sync 00:04:49.578 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: Heap on socket 0 was expanded by 6MB 00:04:49.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.578 EAL: request: mp_malloc_sync 00:04:49.578 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: Heap on socket 0 was shrunk by 6MB 00:04:49.578 EAL: Trying to obtain current memory policy. 00:04:49.578 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.578 EAL: Restoring previous memory policy: 4 00:04:49.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.578 EAL: request: mp_malloc_sync 00:04:49.578 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: Heap on socket 0 was expanded by 10MB 00:04:49.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.578 EAL: request: mp_malloc_sync 00:04:49.578 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: Heap on socket 0 was shrunk by 10MB 00:04:49.578 EAL: Trying to obtain current memory policy. 00:04:49.578 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.578 EAL: Restoring previous memory policy: 4 00:04:49.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.578 EAL: request: mp_malloc_sync 00:04:49.578 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: Heap on socket 0 was expanded by 18MB 00:04:49.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.578 EAL: request: mp_malloc_sync 00:04:49.578 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: Heap on socket 0 was shrunk by 18MB 00:04:49.578 EAL: Trying to obtain current memory policy. 
00:04:49.578 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.578 EAL: Restoring previous memory policy: 4 00:04:49.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.578 EAL: request: mp_malloc_sync 00:04:49.578 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: Heap on socket 0 was expanded by 34MB 00:04:49.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.578 EAL: request: mp_malloc_sync 00:04:49.578 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: Heap on socket 0 was shrunk by 34MB 00:04:49.578 EAL: Trying to obtain current memory policy. 00:04:49.578 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.578 EAL: Restoring previous memory policy: 4 00:04:49.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.578 EAL: request: mp_malloc_sync 00:04:49.578 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: Heap on socket 0 was expanded by 66MB 00:04:49.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.578 EAL: request: mp_malloc_sync 00:04:49.578 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: Heap on socket 0 was shrunk by 66MB 00:04:49.578 EAL: Trying to obtain current memory policy. 00:04:49.578 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.578 EAL: Restoring previous memory policy: 4 00:04:49.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.578 EAL: request: mp_malloc_sync 00:04:49.578 EAL: No shared files mode enabled, IPC is disabled 00:04:49.578 EAL: Heap on socket 0 was expanded by 130MB 00:04:49.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.836 EAL: request: mp_malloc_sync 00:04:49.836 EAL: No shared files mode enabled, IPC is disabled 00:04:49.836 EAL: Heap on socket 0 was shrunk by 130MB 00:04:49.836 EAL: Trying to obtain current memory policy. 
00:04:49.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.836 EAL: Restoring previous memory policy: 4 00:04:49.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.836 EAL: request: mp_malloc_sync 00:04:49.836 EAL: No shared files mode enabled, IPC is disabled 00:04:49.836 EAL: Heap on socket 0 was expanded by 258MB 00:04:49.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.836 EAL: request: mp_malloc_sync 00:04:49.836 EAL: No shared files mode enabled, IPC is disabled 00:04:49.836 EAL: Heap on socket 0 was shrunk by 258MB 00:04:49.836 EAL: Trying to obtain current memory policy. 00:04:49.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.095 EAL: Restoring previous memory policy: 4 00:04:50.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.095 EAL: request: mp_malloc_sync 00:04:50.095 EAL: No shared files mode enabled, IPC is disabled 00:04:50.095 EAL: Heap on socket 0 was expanded by 514MB 00:04:50.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.095 EAL: request: mp_malloc_sync 00:04:50.095 EAL: No shared files mode enabled, IPC is disabled 00:04:50.095 EAL: Heap on socket 0 was shrunk by 514MB 00:04:50.095 EAL: Trying to obtain current memory policy. 
00:04:50.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.352 EAL: Restoring previous memory policy: 4 00:04:50.352 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.352 EAL: request: mp_malloc_sync 00:04:50.352 EAL: No shared files mode enabled, IPC is disabled 00:04:50.352 EAL: Heap on socket 0 was expanded by 1026MB 00:04:50.610 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.610 passed 00:04:50.610 00:04:50.610 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.610 suites 1 1 n/a 0 0 00:04:50.610 tests 2 2 2 0 0 00:04:50.610 asserts 5204 5204 5204 0 n/a 00:04:50.610 00:04:50.610 Elapsed time = 1.025 seconds 00:04:50.610 EAL: request: mp_malloc_sync 00:04:50.610 EAL: No shared files mode enabled, IPC is disabled 00:04:50.610 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:50.610 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.610 EAL: request: mp_malloc_sync 00:04:50.610 EAL: No shared files mode enabled, IPC is disabled 00:04:50.610 EAL: Heap on socket 0 was shrunk by 2MB 00:04:50.610 EAL: No shared files mode enabled, IPC is disabled 00:04:50.610 EAL: No shared files mode enabled, IPC is disabled 00:04:50.610 EAL: No shared files mode enabled, IPC is disabled 00:04:50.610 00:04:50.610 real 0m1.229s 00:04:50.610 user 0m0.649s 00:04:50.610 sys 0m0.447s 00:04:50.610 10:05:23 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.610 10:05:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:50.610 ************************************ 00:04:50.610 END TEST env_vtophys 00:04:50.610 ************************************ 00:04:50.610 10:05:23 env -- common/autotest_common.sh@1142 -- # return 0 00:04:50.610 10:05:23 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:50.610 10:05:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.610 10:05:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.610 10:05:23 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:50.610 ************************************ 00:04:50.610 START TEST env_pci 00:04:50.610 ************************************ 00:04:50.610 10:05:23 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:50.610 00:04:50.610 00:04:50.610 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.610 http://cunit.sourceforge.net/ 00:04:50.610 00:04:50.610 00:04:50.610 Suite: pci 00:04:50.610 Test: pci_hook ...[2024-07-25 10:05:23.858554] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58735 has claimed it 00:04:50.610 passed 00:04:50.610 00:04:50.610 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.610 suites 1 1 n/a 0 0 00:04:50.610 tests 1 1 1 0 0 00:04:50.610 asserts 25 25 25 0 n/a 00:04:50.610 00:04:50.610 Elapsed time = 0.002 seconds 00:04:50.610 EAL: Cannot find device (10000:00:01.0) 00:04:50.610 EAL: Failed to attach device on primary process 00:04:50.610 00:04:50.610 real 0m0.020s 00:04:50.610 user 0m0.008s 00:04:50.610 sys 0m0.012s 00:04:50.610 10:05:23 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.610 10:05:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:50.610 ************************************ 00:04:50.610 END TEST env_pci 00:04:50.610 ************************************ 00:04:50.868 10:05:23 env -- common/autotest_common.sh@1142 -- # return 0 00:04:50.868 10:05:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:50.868 10:05:23 env -- env/env.sh@15 -- # uname 00:04:50.868 10:05:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:50.868 10:05:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:50.868 10:05:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 
--base-virtaddr=0x200000000000 00:04:50.868 10:05:23 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:50.868 10:05:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.868 10:05:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.868 ************************************ 00:04:50.868 START TEST env_dpdk_post_init 00:04:50.868 ************************************ 00:04:50.868 10:05:23 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:50.868 EAL: Detected CPU lcores: 10 00:04:50.868 EAL: Detected NUMA nodes: 1 00:04:50.868 EAL: Detected shared linkage of DPDK 00:04:50.868 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:50.868 EAL: Selected IOVA mode 'PA' 00:04:50.868 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:50.868 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:50.868 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:50.868 Starting DPDK initialization... 00:04:50.868 Starting SPDK post initialization... 00:04:50.868 SPDK NVMe probe 00:04:50.868 Attaching to 0000:00:10.0 00:04:50.868 Attaching to 0000:00:11.0 00:04:50.868 Attached to 0000:00:10.0 00:04:50.868 Attached to 0000:00:11.0 00:04:50.868 Cleaning up... 
00:04:50.868 00:04:50.868 real 0m0.184s 00:04:50.868 user 0m0.043s 00:04:50.868 sys 0m0.042s 00:04:50.868 10:05:24 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.868 10:05:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.868 ************************************ 00:04:50.868 END TEST env_dpdk_post_init 00:04:50.868 ************************************ 00:04:51.126 10:05:24 env -- common/autotest_common.sh@1142 -- # return 0 00:04:51.126 10:05:24 env -- env/env.sh@26 -- # uname 00:04:51.126 10:05:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:51.126 10:05:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.126 10:05:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.126 10:05:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.126 10:05:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.126 ************************************ 00:04:51.126 START TEST env_mem_callbacks 00:04:51.126 ************************************ 00:04:51.126 10:05:24 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.126 EAL: Detected CPU lcores: 10 00:04:51.126 EAL: Detected NUMA nodes: 1 00:04:51.126 EAL: Detected shared linkage of DPDK 00:04:51.126 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.126 EAL: Selected IOVA mode 'PA' 00:04:51.126 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.126 00:04:51.126 00:04:51.126 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.126 http://cunit.sourceforge.net/ 00:04:51.126 00:04:51.126 00:04:51.126 Suite: memory 00:04:51.126 Test: test ... 
00:04:51.126 register 0x200000200000 2097152 00:04:51.126 malloc 3145728 00:04:51.126 register 0x200000400000 4194304 00:04:51.126 buf 0x200000500000 len 3145728 PASSED 00:04:51.126 malloc 64 00:04:51.126 buf 0x2000004fff40 len 64 PASSED 00:04:51.126 malloc 4194304 00:04:51.126 register 0x200000800000 6291456 00:04:51.126 buf 0x200000a00000 len 4194304 PASSED 00:04:51.126 free 0x200000500000 3145728 00:04:51.126 free 0x2000004fff40 64 00:04:51.126 unregister 0x200000400000 4194304 PASSED 00:04:51.126 free 0x200000a00000 4194304 00:04:51.126 unregister 0x200000800000 6291456 PASSED 00:04:51.126 malloc 8388608 00:04:51.126 register 0x200000400000 10485760 00:04:51.126 buf 0x200000600000 len 8388608 PASSED 00:04:51.126 free 0x200000600000 8388608 00:04:51.126 unregister 0x200000400000 10485760 PASSED 00:04:51.126 passed 00:04:51.126 00:04:51.126 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.126 suites 1 1 n/a 0 0 00:04:51.126 tests 1 1 1 0 0 00:04:51.126 asserts 15 15 15 0 n/a 00:04:51.126 00:04:51.126 Elapsed time = 0.006 seconds 00:04:51.126 00:04:51.126 real 0m0.140s 00:04:51.126 user 0m0.017s 00:04:51.126 sys 0m0.021s 00:04:51.126 10:05:24 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.126 10:05:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:51.126 ************************************ 00:04:51.126 END TEST env_mem_callbacks 00:04:51.126 ************************************ 00:04:51.126 10:05:24 env -- common/autotest_common.sh@1142 -- # return 0 00:04:51.126 00:04:51.126 real 0m2.218s 00:04:51.126 user 0m1.090s 00:04:51.126 sys 0m0.795s 00:04:51.126 10:05:24 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.126 10:05:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.126 ************************************ 00:04:51.126 END TEST env 00:04:51.126 ************************************ 00:04:51.385 10:05:24 -- common/autotest_common.sh@1142 -- # return 0 
00:04:51.385 10:05:24 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:51.385 10:05:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.385 10:05:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.385 10:05:24 -- common/autotest_common.sh@10 -- # set +x 00:04:51.385 ************************************ 00:04:51.385 START TEST rpc 00:04:51.385 ************************************ 00:04:51.385 10:05:24 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:51.385 * Looking for test storage... 00:04:51.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:51.385 10:05:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58850 00:04:51.385 10:05:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.385 10:05:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58850 00:04:51.385 10:05:24 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:51.385 10:05:24 rpc -- common/autotest_common.sh@829 -- # '[' -z 58850 ']' 00:04:51.385 10:05:24 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.385 10:05:24 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.385 10:05:24 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.385 10:05:24 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.385 10:05:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.385 [2024-07-25 10:05:24.629480] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:04:51.385 [2024-07-25 10:05:24.629591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58850 ] 00:04:51.646 [2024-07-25 10:05:24.766182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.646 [2024-07-25 10:05:24.889169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:51.646 [2024-07-25 10:05:24.889250] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58850' to capture a snapshot of events at runtime. 00:04:51.646 [2024-07-25 10:05:24.889266] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:51.646 [2024-07-25 10:05:24.889279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:51.646 [2024-07-25 10:05:24.889291] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58850 for offline analysis/debug. 
00:04:51.646 [2024-07-25 10:05:24.889329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.612 10:05:25 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.612 10:05:25 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:52.612 10:05:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:52.612 10:05:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:52.612 10:05:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:52.612 10:05:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:52.612 10:05:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.612 10:05:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.612 10:05:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.612 ************************************ 00:04:52.612 START TEST rpc_integrity 00:04:52.612 ************************************ 00:04:52.612 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:52.612 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:52.612 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.612 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.612 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.612 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:52.612 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:52.612 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:52.612 10:05:25 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:52.612 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.612 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.612 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.612 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:52.612 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:52.612 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.612 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.612 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.612 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:52.612 { 00:04:52.612 "name": "Malloc0", 00:04:52.612 "aliases": [ 00:04:52.612 "bc8946ef-82f0-47ff-bb7f-ec80f2ce3ec8" 00:04:52.612 ], 00:04:52.612 "product_name": "Malloc disk", 00:04:52.612 "block_size": 512, 00:04:52.612 "num_blocks": 16384, 00:04:52.612 "uuid": "bc8946ef-82f0-47ff-bb7f-ec80f2ce3ec8", 00:04:52.612 "assigned_rate_limits": { 00:04:52.612 "rw_ios_per_sec": 0, 00:04:52.612 "rw_mbytes_per_sec": 0, 00:04:52.612 "r_mbytes_per_sec": 0, 00:04:52.612 "w_mbytes_per_sec": 0 00:04:52.612 }, 00:04:52.612 "claimed": false, 00:04:52.612 "zoned": false, 00:04:52.612 "supported_io_types": { 00:04:52.612 "read": true, 00:04:52.612 "write": true, 00:04:52.612 "unmap": true, 00:04:52.612 "flush": true, 00:04:52.612 "reset": true, 00:04:52.612 "nvme_admin": false, 00:04:52.612 "nvme_io": false, 00:04:52.612 "nvme_io_md": false, 00:04:52.612 "write_zeroes": true, 00:04:52.612 "zcopy": true, 00:04:52.612 "get_zone_info": false, 00:04:52.612 "zone_management": false, 00:04:52.612 "zone_append": false, 00:04:52.612 "compare": false, 00:04:52.612 "compare_and_write": false, 00:04:52.612 "abort": true, 00:04:52.612 "seek_hole": false, 
00:04:52.612 "seek_data": false, 00:04:52.612 "copy": true, 00:04:52.612 "nvme_iov_md": false 00:04:52.612 }, 00:04:52.612 "memory_domains": [ 00:04:52.612 { 00:04:52.612 "dma_device_id": "system", 00:04:52.612 "dma_device_type": 1 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.612 "dma_device_type": 2 00:04:52.612 } 00:04:52.612 ], 00:04:52.612 "driver_specific": {} 00:04:52.612 } 00:04:52.612 ]' 00:04:52.612 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:52.612 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:52.612 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:52.612 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.612 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.612 [2024-07-25 10:05:25.705650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:52.612 [2024-07-25 10:05:25.705704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:52.612 [2024-07-25 10:05:25.705721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xda9430 00:04:52.612 [2024-07-25 10:05:25.705731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:52.612 [2024-07-25 10:05:25.707309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:52.612 [2024-07-25 10:05:25.707350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:52.612 Passthru0 00:04:52.612 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.612 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:52.612 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.613 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:52.613 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.613 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:52.613 { 00:04:52.613 "name": "Malloc0", 00:04:52.613 "aliases": [ 00:04:52.613 "bc8946ef-82f0-47ff-bb7f-ec80f2ce3ec8" 00:04:52.613 ], 00:04:52.613 "product_name": "Malloc disk", 00:04:52.613 "block_size": 512, 00:04:52.613 "num_blocks": 16384, 00:04:52.613 "uuid": "bc8946ef-82f0-47ff-bb7f-ec80f2ce3ec8", 00:04:52.613 "assigned_rate_limits": { 00:04:52.613 "rw_ios_per_sec": 0, 00:04:52.613 "rw_mbytes_per_sec": 0, 00:04:52.613 "r_mbytes_per_sec": 0, 00:04:52.613 "w_mbytes_per_sec": 0 00:04:52.613 }, 00:04:52.613 "claimed": true, 00:04:52.613 "claim_type": "exclusive_write", 00:04:52.613 "zoned": false, 00:04:52.613 "supported_io_types": { 00:04:52.613 "read": true, 00:04:52.613 "write": true, 00:04:52.613 "unmap": true, 00:04:52.613 "flush": true, 00:04:52.613 "reset": true, 00:04:52.613 "nvme_admin": false, 00:04:52.613 "nvme_io": false, 00:04:52.613 "nvme_io_md": false, 00:04:52.613 "write_zeroes": true, 00:04:52.613 "zcopy": true, 00:04:52.613 "get_zone_info": false, 00:04:52.613 "zone_management": false, 00:04:52.613 "zone_append": false, 00:04:52.613 "compare": false, 00:04:52.613 "compare_and_write": false, 00:04:52.613 "abort": true, 00:04:52.613 "seek_hole": false, 00:04:52.613 "seek_data": false, 00:04:52.613 "copy": true, 00:04:52.613 "nvme_iov_md": false 00:04:52.613 }, 00:04:52.613 "memory_domains": [ 00:04:52.613 { 00:04:52.613 "dma_device_id": "system", 00:04:52.613 "dma_device_type": 1 00:04:52.613 }, 00:04:52.613 { 00:04:52.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.613 "dma_device_type": 2 00:04:52.613 } 00:04:52.613 ], 00:04:52.613 "driver_specific": {} 00:04:52.613 }, 00:04:52.613 { 00:04:52.613 "name": "Passthru0", 00:04:52.613 "aliases": [ 00:04:52.613 "1384f5c9-86cf-51c0-8206-446eefe1e4cb" 00:04:52.613 ], 00:04:52.613 "product_name": "passthru", 00:04:52.613 
"block_size": 512, 00:04:52.613 "num_blocks": 16384, 00:04:52.613 "uuid": "1384f5c9-86cf-51c0-8206-446eefe1e4cb", 00:04:52.613 "assigned_rate_limits": { 00:04:52.613 "rw_ios_per_sec": 0, 00:04:52.613 "rw_mbytes_per_sec": 0, 00:04:52.613 "r_mbytes_per_sec": 0, 00:04:52.613 "w_mbytes_per_sec": 0 00:04:52.613 }, 00:04:52.613 "claimed": false, 00:04:52.613 "zoned": false, 00:04:52.613 "supported_io_types": { 00:04:52.613 "read": true, 00:04:52.613 "write": true, 00:04:52.613 "unmap": true, 00:04:52.613 "flush": true, 00:04:52.613 "reset": true, 00:04:52.613 "nvme_admin": false, 00:04:52.613 "nvme_io": false, 00:04:52.613 "nvme_io_md": false, 00:04:52.613 "write_zeroes": true, 00:04:52.613 "zcopy": true, 00:04:52.613 "get_zone_info": false, 00:04:52.613 "zone_management": false, 00:04:52.613 "zone_append": false, 00:04:52.613 "compare": false, 00:04:52.613 "compare_and_write": false, 00:04:52.613 "abort": true, 00:04:52.613 "seek_hole": false, 00:04:52.613 "seek_data": false, 00:04:52.613 "copy": true, 00:04:52.613 "nvme_iov_md": false 00:04:52.613 }, 00:04:52.613 "memory_domains": [ 00:04:52.613 { 00:04:52.613 "dma_device_id": "system", 00:04:52.613 "dma_device_type": 1 00:04:52.613 }, 00:04:52.613 { 00:04:52.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.613 "dma_device_type": 2 00:04:52.613 } 00:04:52.613 ], 00:04:52.613 "driver_specific": { 00:04:52.613 "passthru": { 00:04:52.613 "name": "Passthru0", 00:04:52.613 "base_bdev_name": "Malloc0" 00:04:52.613 } 00:04:52.613 } 00:04:52.613 } 00:04:52.613 ]' 00:04:52.613 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:52.613 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:52.613 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:52.613 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.613 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.613 10:05:25 rpc.rpc_integrity 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.613 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:52.613 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.613 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.613 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.613 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:52.613 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.613 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.613 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.613 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:52.613 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:52.613 10:05:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:52.613 00:04:52.613 real 0m0.290s 00:04:52.613 user 0m0.170s 00:04:52.613 sys 0m0.060s 00:04:52.613 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.613 ************************************ 00:04:52.613 END TEST rpc_integrity 00:04:52.613 ************************************ 00:04:52.613 10:05:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.873 10:05:25 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:52.873 10:05:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:52.873 10:05:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.873 10:05:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.873 10:05:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.873 ************************************ 00:04:52.873 START TEST rpc_plugins 00:04:52.873 ************************************ 00:04:52.873 10:05:25 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # 
rpc_plugins 00:04:52.873 10:05:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:52.873 10:05:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.873 10:05:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.873 10:05:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.873 10:05:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:52.873 10:05:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:52.873 10:05:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.873 10:05:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.873 10:05:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.873 10:05:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:52.873 { 00:04:52.873 "name": "Malloc1", 00:04:52.873 "aliases": [ 00:04:52.873 "8ffca550-15e0-4540-87ab-9c56bc1ccb58" 00:04:52.873 ], 00:04:52.873 "product_name": "Malloc disk", 00:04:52.873 "block_size": 4096, 00:04:52.873 "num_blocks": 256, 00:04:52.873 "uuid": "8ffca550-15e0-4540-87ab-9c56bc1ccb58", 00:04:52.873 "assigned_rate_limits": { 00:04:52.873 "rw_ios_per_sec": 0, 00:04:52.873 "rw_mbytes_per_sec": 0, 00:04:52.873 "r_mbytes_per_sec": 0, 00:04:52.873 "w_mbytes_per_sec": 0 00:04:52.873 }, 00:04:52.873 "claimed": false, 00:04:52.873 "zoned": false, 00:04:52.873 "supported_io_types": { 00:04:52.873 "read": true, 00:04:52.873 "write": true, 00:04:52.873 "unmap": true, 00:04:52.873 "flush": true, 00:04:52.873 "reset": true, 00:04:52.873 "nvme_admin": false, 00:04:52.873 "nvme_io": false, 00:04:52.873 "nvme_io_md": false, 00:04:52.873 "write_zeroes": true, 00:04:52.873 "zcopy": true, 00:04:52.873 "get_zone_info": false, 00:04:52.873 "zone_management": false, 00:04:52.873 "zone_append": false, 00:04:52.873 "compare": false, 00:04:52.873 "compare_and_write": false, 00:04:52.873 "abort": true, 00:04:52.873 
"seek_hole": false, 00:04:52.873 "seek_data": false, 00:04:52.873 "copy": true, 00:04:52.873 "nvme_iov_md": false 00:04:52.873 }, 00:04:52.873 "memory_domains": [ 00:04:52.873 { 00:04:52.873 "dma_device_id": "system", 00:04:52.873 "dma_device_type": 1 00:04:52.873 }, 00:04:52.873 { 00:04:52.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.873 "dma_device_type": 2 00:04:52.873 } 00:04:52.873 ], 00:04:52.873 "driver_specific": {} 00:04:52.873 } 00:04:52.873 ]' 00:04:52.873 10:05:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:52.873 10:05:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:52.873 10:05:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:52.873 10:05:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.873 10:05:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.873 10:05:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.873 10:05:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:52.873 10:05:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.873 10:05:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.873 10:05:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.873 10:05:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:52.873 10:05:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:52.873 10:05:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:52.873 00:04:52.873 real 0m0.148s 00:04:52.873 user 0m0.088s 00:04:52.873 sys 0m0.026s 00:04:52.873 10:05:26 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.873 10:05:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.873 ************************************ 00:04:52.873 END TEST rpc_plugins 00:04:52.873 ************************************ 00:04:52.873 10:05:26 rpc -- 
common/autotest_common.sh@1142 -- # return 0 00:04:52.873 10:05:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:52.873 10:05:26 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.873 10:05:26 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.873 10:05:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.873 ************************************ 00:04:52.873 START TEST rpc_trace_cmd_test 00:04:52.873 ************************************ 00:04:52.873 10:05:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:52.873 10:05:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:52.873 10:05:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:52.873 10:05:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.873 10:05:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:52.873 10:05:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.873 10:05:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:52.873 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58850", 00:04:52.873 "tpoint_group_mask": "0x8", 00:04:52.873 "iscsi_conn": { 00:04:52.873 "mask": "0x2", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "scsi": { 00:04:52.873 "mask": "0x4", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "bdev": { 00:04:52.873 "mask": "0x8", 00:04:52.873 "tpoint_mask": "0xffffffffffffffff" 00:04:52.873 }, 00:04:52.873 "nvmf_rdma": { 00:04:52.873 "mask": "0x10", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "nvmf_tcp": { 00:04:52.873 "mask": "0x20", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "ftl": { 00:04:52.873 "mask": "0x40", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "blobfs": { 00:04:52.873 "mask": "0x80", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 
00:04:52.873 "dsa": { 00:04:52.873 "mask": "0x200", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "thread": { 00:04:52.873 "mask": "0x400", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "nvme_pcie": { 00:04:52.873 "mask": "0x800", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "iaa": { 00:04:52.873 "mask": "0x1000", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "nvme_tcp": { 00:04:52.873 "mask": "0x2000", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "bdev_nvme": { 00:04:52.873 "mask": "0x4000", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "sock": { 00:04:52.873 "mask": "0x8000", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 } 00:04:52.873 }' 00:04:53.132 10:05:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:53.132 10:05:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:53.132 10:05:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:53.132 10:05:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:53.132 10:05:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:53.132 10:05:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:53.132 10:05:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:53.132 10:05:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:53.132 10:05:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:53.132 10:05:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:53.132 00:04:53.132 real 0m0.235s 00:04:53.132 user 0m0.196s 00:04:53.132 sys 0m0.030s 00:04:53.132 10:05:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.132 10:05:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:53.132 ************************************ 00:04:53.132 END TEST 
rpc_trace_cmd_test 00:04:53.132 ************************************ 00:04:53.391 10:05:26 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.391 10:05:26 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:53.391 10:05:26 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:53.391 10:05:26 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:53.391 10:05:26 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.391 10:05:26 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.391 10:05:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.391 ************************************ 00:04:53.391 START TEST rpc_daemon_integrity 00:04:53.391 ************************************ 00:04:53.391 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:53.391 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:53.391 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.391 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.391 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.391 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:53.391 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:53.391 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:53.391 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:53.391 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.391 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.391 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- 
# rpc_cmd bdev_get_bdevs 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:53.392 { 00:04:53.392 "name": "Malloc2", 00:04:53.392 "aliases": [ 00:04:53.392 "2b248b49-c828-4c5d-a3a4-c1e71bc4c1ed" 00:04:53.392 ], 00:04:53.392 "product_name": "Malloc disk", 00:04:53.392 "block_size": 512, 00:04:53.392 "num_blocks": 16384, 00:04:53.392 "uuid": "2b248b49-c828-4c5d-a3a4-c1e71bc4c1ed", 00:04:53.392 "assigned_rate_limits": { 00:04:53.392 "rw_ios_per_sec": 0, 00:04:53.392 "rw_mbytes_per_sec": 0, 00:04:53.392 "r_mbytes_per_sec": 0, 00:04:53.392 "w_mbytes_per_sec": 0 00:04:53.392 }, 00:04:53.392 "claimed": false, 00:04:53.392 "zoned": false, 00:04:53.392 "supported_io_types": { 00:04:53.392 "read": true, 00:04:53.392 "write": true, 00:04:53.392 "unmap": true, 00:04:53.392 "flush": true, 00:04:53.392 "reset": true, 00:04:53.392 "nvme_admin": false, 00:04:53.392 "nvme_io": false, 00:04:53.392 "nvme_io_md": false, 00:04:53.392 "write_zeroes": true, 00:04:53.392 "zcopy": true, 00:04:53.392 "get_zone_info": false, 00:04:53.392 "zone_management": false, 00:04:53.392 "zone_append": false, 00:04:53.392 "compare": false, 00:04:53.392 "compare_and_write": false, 00:04:53.392 "abort": true, 00:04:53.392 "seek_hole": false, 00:04:53.392 "seek_data": false, 00:04:53.392 "copy": true, 00:04:53.392 "nvme_iov_md": false 00:04:53.392 }, 00:04:53.392 "memory_domains": [ 00:04:53.392 { 00:04:53.392 "dma_device_id": "system", 00:04:53.392 "dma_device_type": 1 00:04:53.392 }, 00:04:53.392 { 00:04:53.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.392 "dma_device_type": 2 00:04:53.392 } 00:04:53.392 ], 00:04:53.392 "driver_specific": {} 00:04:53.392 } 00:04:53.392 ]' 
00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.392 [2024-07-25 10:05:26.537902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:53.392 [2024-07-25 10:05:26.537953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:53.392 [2024-07-25 10:05:26.537991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdaa370 00:04:53.392 [2024-07-25 10:05:26.538002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:53.392 [2024-07-25 10:05:26.539391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:53.392 [2024-07-25 10:05:26.539461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:53.392 Passthru0 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.392 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:53.392 { 00:04:53.392 "name": "Malloc2", 00:04:53.392 "aliases": [ 00:04:53.392 "2b248b49-c828-4c5d-a3a4-c1e71bc4c1ed" 00:04:53.392 ], 00:04:53.392 "product_name": "Malloc disk", 00:04:53.392 "block_size": 512, 
00:04:53.392 "num_blocks": 16384, 00:04:53.392 "uuid": "2b248b49-c828-4c5d-a3a4-c1e71bc4c1ed", 00:04:53.392 "assigned_rate_limits": { 00:04:53.392 "rw_ios_per_sec": 0, 00:04:53.392 "rw_mbytes_per_sec": 0, 00:04:53.392 "r_mbytes_per_sec": 0, 00:04:53.392 "w_mbytes_per_sec": 0 00:04:53.392 }, 00:04:53.392 "claimed": true, 00:04:53.392 "claim_type": "exclusive_write", 00:04:53.392 "zoned": false, 00:04:53.392 "supported_io_types": { 00:04:53.392 "read": true, 00:04:53.392 "write": true, 00:04:53.392 "unmap": true, 00:04:53.392 "flush": true, 00:04:53.392 "reset": true, 00:04:53.392 "nvme_admin": false, 00:04:53.392 "nvme_io": false, 00:04:53.392 "nvme_io_md": false, 00:04:53.392 "write_zeroes": true, 00:04:53.392 "zcopy": true, 00:04:53.392 "get_zone_info": false, 00:04:53.392 "zone_management": false, 00:04:53.392 "zone_append": false, 00:04:53.392 "compare": false, 00:04:53.392 "compare_and_write": false, 00:04:53.392 "abort": true, 00:04:53.392 "seek_hole": false, 00:04:53.392 "seek_data": false, 00:04:53.392 "copy": true, 00:04:53.392 "nvme_iov_md": false 00:04:53.392 }, 00:04:53.392 "memory_domains": [ 00:04:53.392 { 00:04:53.392 "dma_device_id": "system", 00:04:53.392 "dma_device_type": 1 00:04:53.392 }, 00:04:53.392 { 00:04:53.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.392 "dma_device_type": 2 00:04:53.392 } 00:04:53.392 ], 00:04:53.392 "driver_specific": {} 00:04:53.392 }, 00:04:53.392 { 00:04:53.392 "name": "Passthru0", 00:04:53.392 "aliases": [ 00:04:53.392 "8da81a52-7f8b-57fe-b892-086410cf86bc" 00:04:53.392 ], 00:04:53.392 "product_name": "passthru", 00:04:53.392 "block_size": 512, 00:04:53.392 "num_blocks": 16384, 00:04:53.392 "uuid": "8da81a52-7f8b-57fe-b892-086410cf86bc", 00:04:53.392 "assigned_rate_limits": { 00:04:53.392 "rw_ios_per_sec": 0, 00:04:53.392 "rw_mbytes_per_sec": 0, 00:04:53.392 "r_mbytes_per_sec": 0, 00:04:53.392 "w_mbytes_per_sec": 0 00:04:53.392 }, 00:04:53.392 "claimed": false, 00:04:53.392 "zoned": false, 00:04:53.392 
"supported_io_types": { 00:04:53.392 "read": true, 00:04:53.392 "write": true, 00:04:53.392 "unmap": true, 00:04:53.392 "flush": true, 00:04:53.392 "reset": true, 00:04:53.392 "nvme_admin": false, 00:04:53.392 "nvme_io": false, 00:04:53.392 "nvme_io_md": false, 00:04:53.392 "write_zeroes": true, 00:04:53.392 "zcopy": true, 00:04:53.392 "get_zone_info": false, 00:04:53.392 "zone_management": false, 00:04:53.393 "zone_append": false, 00:04:53.393 "compare": false, 00:04:53.393 "compare_and_write": false, 00:04:53.393 "abort": true, 00:04:53.393 "seek_hole": false, 00:04:53.393 "seek_data": false, 00:04:53.393 "copy": true, 00:04:53.393 "nvme_iov_md": false 00:04:53.393 }, 00:04:53.393 "memory_domains": [ 00:04:53.393 { 00:04:53.393 "dma_device_id": "system", 00:04:53.393 "dma_device_type": 1 00:04:53.393 }, 00:04:53.393 { 00:04:53.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.393 "dma_device_type": 2 00:04:53.393 } 00:04:53.393 ], 00:04:53.393 "driver_specific": { 00:04:53.393 "passthru": { 00:04:53.393 "name": "Passthru0", 00:04:53.393 "base_bdev_name": "Malloc2" 00:04:53.393 } 00:04:53.393 } 00:04:53.393 } 00:04:53.393 ]' 00:04:53.393 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:53.393 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:53.393 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:53.393 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.393 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.393 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.393 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:53.393 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.393 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:04:53.393 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.393 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:53.393 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.393 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.651 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.651 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:53.651 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:53.651 10:05:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:53.651 00:04:53.651 real 0m0.299s 00:04:53.651 user 0m0.188s 00:04:53.651 sys 0m0.047s 00:04:53.651 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.651 ************************************ 00:04:53.651 END TEST rpc_daemon_integrity 00:04:53.651 ************************************ 00:04:53.651 10:05:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.651 10:05:26 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.651 10:05:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:53.651 10:05:26 rpc -- rpc/rpc.sh@84 -- # killprocess 58850 00:04:53.651 10:05:26 rpc -- common/autotest_common.sh@948 -- # '[' -z 58850 ']' 00:04:53.651 10:05:26 rpc -- common/autotest_common.sh@952 -- # kill -0 58850 00:04:53.651 10:05:26 rpc -- common/autotest_common.sh@953 -- # uname 00:04:53.651 10:05:26 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.651 10:05:26 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58850 00:04:53.651 10:05:26 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:53.651 killing process with pid 58850 00:04:53.651 10:05:26 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:04:53.651 10:05:26 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58850' 00:04:53.651 10:05:26 rpc -- common/autotest_common.sh@967 -- # kill 58850 00:04:53.651 10:05:26 rpc -- common/autotest_common.sh@972 -- # wait 58850 00:04:53.909 00:04:53.909 real 0m2.702s 00:04:53.909 user 0m3.405s 00:04:53.909 sys 0m0.739s 00:04:53.909 10:05:27 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.909 10:05:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.909 ************************************ 00:04:53.909 END TEST rpc 00:04:53.909 ************************************ 00:04:53.909 10:05:27 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.909 10:05:27 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:53.909 10:05:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.909 10:05:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.909 10:05:27 -- common/autotest_common.sh@10 -- # set +x 00:04:54.166 ************************************ 00:04:54.166 START TEST skip_rpc 00:04:54.166 ************************************ 00:04:54.166 10:05:27 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:54.166 * Looking for test storage... 
00:04:54.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:54.166 10:05:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:54.166 10:05:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.166 10:05:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:54.166 10:05:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.166 10:05:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.166 10:05:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.166 ************************************ 00:04:54.166 START TEST skip_rpc 00:04:54.166 ************************************ 00:04:54.166 10:05:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:54.166 10:05:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59042 00:04:54.166 10:05:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.166 10:05:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:54.166 10:05:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:54.167 [2024-07-25 10:05:27.369801] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:04:54.167 [2024-07-25 10:05:27.369912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59042 ] 00:04:54.424 [2024-07-25 10:05:27.515861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.424 [2024-07-25 10:05:27.633615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59042 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 59042 ']' 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 59042 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59042 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:59.713 killing process with pid 59042 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59042' 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 59042 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 59042 00:04:59.713 00:04:59.713 real 0m5.369s 00:04:59.713 user 0m5.013s 00:04:59.713 sys 0m0.264s 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.713 10:05:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.713 ************************************ 00:04:59.713 END TEST skip_rpc 00:04:59.713 ************************************ 00:04:59.713 10:05:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:59.713 10:05:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:59.713 10:05:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.713 10:05:32 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.713 10:05:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 
00:04:59.713 ************************************ 00:04:59.713 START TEST skip_rpc_with_json 00:04:59.713 ************************************ 00:04:59.713 10:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:59.713 10:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:59.713 10:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59129 00:04:59.713 10:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.713 10:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59129 00:04:59.713 10:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.713 10:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59129 ']' 00:04:59.713 10:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.713 10:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.713 10:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.713 10:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.713 10:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.713 [2024-07-25 10:05:32.792608] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:04:59.713 [2024-07-25 10:05:32.792712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59129 ] 00:04:59.713 [2024-07-25 10:05:32.935266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.972 [2024-07-25 10:05:33.028585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.536 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.536 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:00.536 10:05:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:00.536 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.536 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.536 [2024-07-25 10:05:33.728923] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:00.536 request: 00:05:00.536 { 00:05:00.536 "trtype": "tcp", 00:05:00.536 "method": "nvmf_get_transports", 00:05:00.536 "req_id": 1 00:05:00.536 } 00:05:00.536 Got JSON-RPC error response 00:05:00.536 response: 00:05:00.536 { 00:05:00.536 "code": -19, 00:05:00.536 "message": "No such device" 00:05:00.536 } 00:05:00.536 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:00.536 10:05:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:00.536 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.536 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.536 [2024-07-25 10:05:33.741018] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:00.536 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.536 10:05:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:00.536 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.536 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.795 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.795 10:05:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.795 { 00:05:00.795 "subsystems": [ 00:05:00.795 { 00:05:00.795 "subsystem": "keyring", 00:05:00.795 "config": [] 00:05:00.795 }, 00:05:00.795 { 00:05:00.795 "subsystem": "iobuf", 00:05:00.795 "config": [ 00:05:00.795 { 00:05:00.795 "method": "iobuf_set_options", 00:05:00.795 "params": { 00:05:00.795 "small_pool_count": 8192, 00:05:00.795 "large_pool_count": 1024, 00:05:00.795 "small_bufsize": 8192, 00:05:00.795 "large_bufsize": 135168 00:05:00.795 } 00:05:00.795 } 00:05:00.795 ] 00:05:00.795 }, 00:05:00.795 { 00:05:00.795 "subsystem": "sock", 00:05:00.795 "config": [ 00:05:00.795 { 00:05:00.795 "method": "sock_set_default_impl", 00:05:00.795 "params": { 00:05:00.795 "impl_name": "posix" 00:05:00.795 } 00:05:00.795 }, 00:05:00.795 { 00:05:00.795 "method": "sock_impl_set_options", 00:05:00.795 "params": { 00:05:00.795 "impl_name": "ssl", 00:05:00.795 "recv_buf_size": 4096, 00:05:00.795 "send_buf_size": 4096, 00:05:00.795 "enable_recv_pipe": true, 00:05:00.795 "enable_quickack": false, 00:05:00.795 "enable_placement_id": 0, 00:05:00.795 "enable_zerocopy_send_server": true, 00:05:00.795 "enable_zerocopy_send_client": false, 00:05:00.795 "zerocopy_threshold": 0, 00:05:00.795 "tls_version": 0, 00:05:00.795 "enable_ktls": false 00:05:00.795 } 00:05:00.795 }, 00:05:00.795 { 00:05:00.795 "method": "sock_impl_set_options", 00:05:00.795 "params": { 
00:05:00.795 "impl_name": "posix", 00:05:00.795 "recv_buf_size": 2097152, 00:05:00.795 "send_buf_size": 2097152, 00:05:00.795 "enable_recv_pipe": true, 00:05:00.795 "enable_quickack": false, 00:05:00.795 "enable_placement_id": 0, 00:05:00.795 "enable_zerocopy_send_server": true, 00:05:00.795 "enable_zerocopy_send_client": false, 00:05:00.795 "zerocopy_threshold": 0, 00:05:00.795 "tls_version": 0, 00:05:00.795 "enable_ktls": false 00:05:00.795 } 00:05:00.795 } 00:05:00.795 ] 00:05:00.795 }, 00:05:00.795 { 00:05:00.795 "subsystem": "vmd", 00:05:00.795 "config": [] 00:05:00.795 }, 00:05:00.795 { 00:05:00.795 "subsystem": "accel", 00:05:00.795 "config": [ 00:05:00.795 { 00:05:00.795 "method": "accel_set_options", 00:05:00.795 "params": { 00:05:00.795 "small_cache_size": 128, 00:05:00.795 "large_cache_size": 16, 00:05:00.795 "task_count": 2048, 00:05:00.795 "sequence_count": 2048, 00:05:00.795 "buf_count": 2048 00:05:00.795 } 00:05:00.795 } 00:05:00.795 ] 00:05:00.795 }, 00:05:00.795 { 00:05:00.795 "subsystem": "bdev", 00:05:00.795 "config": [ 00:05:00.795 { 00:05:00.795 "method": "bdev_set_options", 00:05:00.795 "params": { 00:05:00.795 "bdev_io_pool_size": 65535, 00:05:00.795 "bdev_io_cache_size": 256, 00:05:00.795 "bdev_auto_examine": true, 00:05:00.795 "iobuf_small_cache_size": 128, 00:05:00.795 "iobuf_large_cache_size": 16 00:05:00.795 } 00:05:00.795 }, 00:05:00.795 { 00:05:00.795 "method": "bdev_raid_set_options", 00:05:00.795 "params": { 00:05:00.795 "process_window_size_kb": 1024, 00:05:00.795 "process_max_bandwidth_mb_sec": 0 00:05:00.795 } 00:05:00.795 }, 00:05:00.795 { 00:05:00.795 "method": "bdev_iscsi_set_options", 00:05:00.795 "params": { 00:05:00.795 "timeout_sec": 30 00:05:00.795 } 00:05:00.795 }, 00:05:00.795 { 00:05:00.795 "method": "bdev_nvme_set_options", 00:05:00.795 "params": { 00:05:00.795 "action_on_timeout": "none", 00:05:00.795 "timeout_us": 0, 00:05:00.795 "timeout_admin_us": 0, 00:05:00.795 "keep_alive_timeout_ms": 10000, 00:05:00.795 
"arbitration_burst": 0, 00:05:00.795 "low_priority_weight": 0, 00:05:00.795 "medium_priority_weight": 0, 00:05:00.795 "high_priority_weight": 0, 00:05:00.795 "nvme_adminq_poll_period_us": 10000, 00:05:00.795 "nvme_ioq_poll_period_us": 0, 00:05:00.795 "io_queue_requests": 0, 00:05:00.795 "delay_cmd_submit": true, 00:05:00.795 "transport_retry_count": 4, 00:05:00.795 "bdev_retry_count": 3, 00:05:00.795 "transport_ack_timeout": 0, 00:05:00.795 "ctrlr_loss_timeout_sec": 0, 00:05:00.795 "reconnect_delay_sec": 0, 00:05:00.795 "fast_io_fail_timeout_sec": 0, 00:05:00.795 "disable_auto_failback": false, 00:05:00.795 "generate_uuids": false, 00:05:00.795 "transport_tos": 0, 00:05:00.795 "nvme_error_stat": false, 00:05:00.795 "rdma_srq_size": 0, 00:05:00.795 "io_path_stat": false, 00:05:00.795 "allow_accel_sequence": false, 00:05:00.795 "rdma_max_cq_size": 0, 00:05:00.795 "rdma_cm_event_timeout_ms": 0, 00:05:00.795 "dhchap_digests": [ 00:05:00.795 "sha256", 00:05:00.795 "sha384", 00:05:00.795 "sha512" 00:05:00.795 ], 00:05:00.796 "dhchap_dhgroups": [ 00:05:00.796 "null", 00:05:00.796 "ffdhe2048", 00:05:00.796 "ffdhe3072", 00:05:00.796 "ffdhe4096", 00:05:00.796 "ffdhe6144", 00:05:00.796 "ffdhe8192" 00:05:00.796 ] 00:05:00.796 } 00:05:00.796 }, 00:05:00.796 { 00:05:00.796 "method": "bdev_nvme_set_hotplug", 00:05:00.796 "params": { 00:05:00.796 "period_us": 100000, 00:05:00.796 "enable": false 00:05:00.796 } 00:05:00.796 }, 00:05:00.796 { 00:05:00.796 "method": "bdev_wait_for_examine" 00:05:00.796 } 00:05:00.796 ] 00:05:00.796 }, 00:05:00.796 { 00:05:00.796 "subsystem": "scsi", 00:05:00.796 "config": null 00:05:00.796 }, 00:05:00.796 { 00:05:00.796 "subsystem": "scheduler", 00:05:00.796 "config": [ 00:05:00.796 { 00:05:00.796 "method": "framework_set_scheduler", 00:05:00.796 "params": { 00:05:00.796 "name": "static" 00:05:00.796 } 00:05:00.796 } 00:05:00.796 ] 00:05:00.796 }, 00:05:00.796 { 00:05:00.796 "subsystem": "vhost_scsi", 00:05:00.796 "config": [] 00:05:00.796 }, 
00:05:00.796 { 00:05:00.796 "subsystem": "vhost_blk", 00:05:00.796 "config": [] 00:05:00.796 }, 00:05:00.796 { 00:05:00.796 "subsystem": "ublk", 00:05:00.796 "config": [] 00:05:00.796 }, 00:05:00.796 { 00:05:00.796 "subsystem": "nbd", 00:05:00.796 "config": [] 00:05:00.796 }, 00:05:00.796 { 00:05:00.796 "subsystem": "nvmf", 00:05:00.796 "config": [ 00:05:00.796 { 00:05:00.796 "method": "nvmf_set_config", 00:05:00.796 "params": { 00:05:00.796 "discovery_filter": "match_any", 00:05:00.796 "admin_cmd_passthru": { 00:05:00.796 "identify_ctrlr": false 00:05:00.796 } 00:05:00.796 } 00:05:00.796 }, 00:05:00.796 { 00:05:00.796 "method": "nvmf_set_max_subsystems", 00:05:00.796 "params": { 00:05:00.796 "max_subsystems": 1024 00:05:00.796 } 00:05:00.796 }, 00:05:00.796 { 00:05:00.796 "method": "nvmf_set_crdt", 00:05:00.796 "params": { 00:05:00.796 "crdt1": 0, 00:05:00.796 "crdt2": 0, 00:05:00.796 "crdt3": 0 00:05:00.796 } 00:05:00.796 }, 00:05:00.796 { 00:05:00.796 "method": "nvmf_create_transport", 00:05:00.796 "params": { 00:05:00.796 "trtype": "TCP", 00:05:00.796 "max_queue_depth": 128, 00:05:00.796 "max_io_qpairs_per_ctrlr": 127, 00:05:00.796 "in_capsule_data_size": 4096, 00:05:00.796 "max_io_size": 131072, 00:05:00.796 "io_unit_size": 131072, 00:05:00.796 "max_aq_depth": 128, 00:05:00.796 "num_shared_buffers": 511, 00:05:00.796 "buf_cache_size": 4294967295, 00:05:00.796 "dif_insert_or_strip": false, 00:05:00.796 "zcopy": false, 00:05:00.796 "c2h_success": true, 00:05:00.796 "sock_priority": 0, 00:05:00.796 "abort_timeout_sec": 1, 00:05:00.796 "ack_timeout": 0, 00:05:00.796 "data_wr_pool_size": 0 00:05:00.796 } 00:05:00.796 } 00:05:00.796 ] 00:05:00.796 }, 00:05:00.796 { 00:05:00.796 "subsystem": "iscsi", 00:05:00.796 "config": [ 00:05:00.796 { 00:05:00.796 "method": "iscsi_set_options", 00:05:00.796 "params": { 00:05:00.796 "node_base": "iqn.2016-06.io.spdk", 00:05:00.796 "max_sessions": 128, 00:05:00.796 "max_connections_per_session": 2, 00:05:00.796 "max_queue_depth": 
64, 00:05:00.796 "default_time2wait": 2, 00:05:00.796 "default_time2retain": 20, 00:05:00.796 "first_burst_length": 8192, 00:05:00.796 "immediate_data": true, 00:05:00.796 "allow_duplicated_isid": false, 00:05:00.796 "error_recovery_level": 0, 00:05:00.796 "nop_timeout": 60, 00:05:00.796 "nop_in_interval": 30, 00:05:00.796 "disable_chap": false, 00:05:00.796 "require_chap": false, 00:05:00.796 "mutual_chap": false, 00:05:00.796 "chap_group": 0, 00:05:00.796 "max_large_datain_per_connection": 64, 00:05:00.796 "max_r2t_per_connection": 4, 00:05:00.796 "pdu_pool_size": 36864, 00:05:00.796 "immediate_data_pool_size": 16384, 00:05:00.796 "data_out_pool_size": 2048 00:05:00.796 } 00:05:00.796 } 00:05:00.796 ] 00:05:00.796 } 00:05:00.796 ] 00:05:00.796 } 00:05:00.796 10:05:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:00.796 10:05:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59129 00:05:00.796 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59129 ']' 00:05:00.796 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59129 00:05:00.796 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:00.796 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.796 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59129 00:05:00.796 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:00.796 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:00.796 killing process with pid 59129 00:05:00.796 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59129' 00:05:00.796 10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59129 00:05:00.796 
10:05:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59129 00:05:01.054 10:05:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:01.054 10:05:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59151 00:05:01.054 10:05:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:06.317 10:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59151 00:05:06.317 10:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59151 ']' 00:05:06.317 10:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59151 00:05:06.317 10:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:06.317 10:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.317 10:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59151 00:05:06.317 10:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.317 killing process with pid 59151 00:05:06.317 10:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.317 10:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59151' 00:05:06.317 10:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59151 00:05:06.317 10:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59151 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:06.576 00:05:06.576 real 
0m6.962s 00:05:06.576 user 0m6.741s 00:05:06.576 sys 0m0.610s 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.576 ************************************ 00:05:06.576 END TEST skip_rpc_with_json 00:05:06.576 ************************************ 00:05:06.576 10:05:39 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:06.576 10:05:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:06.576 10:05:39 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.576 10:05:39 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.576 10:05:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.576 ************************************ 00:05:06.576 START TEST skip_rpc_with_delay 00:05:06.576 ************************************ 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.576 10:05:39 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:06.576 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.576 [2024-07-25 10:05:39.817241] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:06.576 [2024-07-25 10:05:39.817386] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:06.835 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:06.835 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:06.835 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:06.835 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:06.835 00:05:06.835 real 0m0.114s 00:05:06.835 user 0m0.063s 00:05:06.835 sys 0m0.049s 00:05:06.835 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.835 ************************************ 00:05:06.835 END TEST skip_rpc_with_delay 00:05:06.835 ************************************ 00:05:06.835 10:05:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:06.835 10:05:39 skip_rpc -- common/autotest_common.sh@1142 -- 
# return 0 00:05:06.835 10:05:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:06.835 10:05:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:06.835 10:05:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:06.835 10:05:39 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.835 10:05:39 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.835 10:05:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.835 ************************************ 00:05:06.835 START TEST exit_on_failed_rpc_init 00:05:06.835 ************************************ 00:05:06.835 10:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:06.835 10:05:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59266 00:05:06.835 10:05:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.835 10:05:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59266 00:05:06.835 10:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59266 ']' 00:05:06.835 10:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.835 10:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.835 10:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:06.835 10:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.835 10:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.835 [2024-07-25 10:05:39.988340] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:05:06.835 [2024-07-25 10:05:39.988457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59266 ] 00:05:07.094 [2024-07-25 10:05:40.129392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.094 [2024-07-25 10:05:40.231547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:07.660 10:05:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:07.918 [2024-07-25 10:05:40.945654] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:05:07.918 [2024-07-25 10:05:40.945760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59284 ] 00:05:07.918 [2024-07-25 10:05:41.087629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.176 [2024-07-25 10:05:41.186275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.176 [2024-07-25 10:05:41.186351] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:08.176 [2024-07-25 10:05:41.186363] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:08.176 [2024-07-25 10:05:41.186372] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59266 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59266 ']' 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59266 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59266 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59266' 
00:05:08.176 killing process with pid 59266 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59266 00:05:08.176 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59266 00:05:08.435 00:05:08.435 real 0m1.722s 00:05:08.435 user 0m1.990s 00:05:08.435 sys 0m0.397s 00:05:08.435 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.435 10:05:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:08.435 ************************************ 00:05:08.435 END TEST exit_on_failed_rpc_init 00:05:08.435 ************************************ 00:05:08.435 10:05:41 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:08.435 10:05:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:08.435 00:05:08.435 real 0m14.511s 00:05:08.435 user 0m13.926s 00:05:08.435 sys 0m1.538s 00:05:08.435 10:05:41 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.435 10:05:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.435 ************************************ 00:05:08.435 END TEST skip_rpc 00:05:08.435 ************************************ 00:05:08.693 10:05:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:08.693 10:05:41 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:08.693 10:05:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.693 10:05:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.693 10:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:08.693 ************************************ 00:05:08.693 START TEST rpc_client 00:05:08.693 ************************************ 00:05:08.693 10:05:41 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:08.693 * Looking for test 
storage... 00:05:08.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:08.693 10:05:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:08.693 OK 00:05:08.693 10:05:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:08.693 00:05:08.693 real 0m0.101s 00:05:08.693 user 0m0.046s 00:05:08.693 sys 0m0.065s 00:05:08.693 10:05:41 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.693 10:05:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:08.693 ************************************ 00:05:08.693 END TEST rpc_client 00:05:08.693 ************************************ 00:05:08.693 10:05:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:08.693 10:05:41 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:08.693 10:05:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.693 10:05:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.693 10:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:08.693 ************************************ 00:05:08.693 START TEST json_config 00:05:08.693 ************************************ 00:05:08.693 10:05:41 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:08.693 10:05:41 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:08.693 10:05:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:08.693 10:05:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.693 10:05:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.693 10:05:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.693 10:05:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.694 10:05:41 json_config -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:05:08.694 10:05:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.694 10:05:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.694 10:05:41 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.694 10:05:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.694 10:05:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.694 10:05:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:828469e8-3269-4fb6-840b-068387b38e35 00:05:08.694 10:05:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=828469e8-3269-4fb6-840b-068387b38e35 00:05:08.694 10:05:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.694 10:05:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.694 10:05:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:08.694 10:05:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.694 10:05:41 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:08.952 10:05:41 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.952 10:05:41 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.952 10:05:41 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.952 10:05:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.952 10:05:41 json_config -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.952 10:05:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.952 10:05:41 json_config -- paths/export.sh@5 -- # export PATH 00:05:08.952 10:05:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.952 10:05:41 json_config -- nvmf/common.sh@47 -- # : 0 00:05:08.952 10:05:41 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:08.952 10:05:41 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:08.952 10:05:41 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.952 10:05:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.952 10:05:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.952 10:05:41 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:05:08.952 10:05:41 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:08.952 10:05:41 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@11 -- # [[ 1 -eq 1 ]] 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:05:08.952 10:05:41 json_config -- 
iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:05:08.952 10:05:41 json_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:08.952 INFO: JSON configuration test init 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON 
configuration test init' 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:08.952 10:05:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.952 10:05:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:08.952 10:05:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.952 10:05:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.952 10:05:41 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:08.952 10:05:41 json_config -- json_config/common.sh@9 -- # local app=target 00:05:08.952 10:05:41 json_config -- json_config/common.sh@10 -- # shift 00:05:08.952 10:05:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:08.952 10:05:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:08.952 10:05:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:08.952 10:05:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.952 10:05:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.952 10:05:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59402 00:05:08.952 10:05:41 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:08.952 Waiting for target to run... 00:05:08.952 10:05:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:08.952 10:05:41 json_config -- json_config/common.sh@25 -- # waitforlisten 59402 /var/tmp/spdk_tgt.sock 00:05:08.952 10:05:41 json_config -- common/autotest_common.sh@829 -- # '[' -z 59402 ']' 00:05:08.952 10:05:41 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:08.952 10:05:41 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:08.953 10:05:41 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:08.953 10:05:41 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.953 10:05:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.953 [2024-07-25 10:05:42.028394] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:05:08.953 [2024-07-25 10:05:42.028483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59402 ] 00:05:09.210 [2024-07-25 10:05:42.377885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.210 [2024-07-25 10:05:42.456774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.776 10:05:42 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.776 00:05:09.776 10:05:42 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:09.776 10:05:42 json_config -- json_config/common.sh@26 -- # echo '' 00:05:09.776 10:05:42 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:09.776 10:05:42 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:09.776 10:05:42 json_config -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.776 10:05:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.776 10:05:43 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:09.776 10:05:43 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:09.776 10:05:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.776 10:05:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.034 10:05:43 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:10.034 10:05:43 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:10.034 10:05:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:10.291 10:05:43 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:10.291 10:05:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:10.291 10:05:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:10.291 10:05:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.291 10:05:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:10.291 10:05:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:10.291 10:05:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:10.291 10:05:43 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:10.291 10:05:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:10.291 10:05:43 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:10.549 10:05:43 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 
'bdev_unregister') 00:05:10.549 10:05:43 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:10.549 10:05:43 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:10.549 10:05:43 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:10.549 10:05:43 json_config -- json_config/json_config.sh@51 -- # sort 00:05:10.549 10:05:43 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:10.549 10:05:43 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:10.549 10:05:43 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:10.549 10:05:43 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:10.549 10:05:43 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:10.549 10:05:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:10.549 10:05:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.806 10:05:43 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:10.806 10:05:43 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:10.806 10:05:43 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:10.806 10:05:43 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:10.806 10:05:43 json_config -- json_config/json_config.sh@291 -- # create_iscsi_subsystem_config 00:05:10.806 10:05:43 json_config -- json_config/json_config.sh@225 -- # timing_enter create_iscsi_subsystem_config 00:05:10.806 10:05:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:10.806 10:05:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.806 10:05:43 json_config -- json_config/json_config.sh@226 -- # tgt_rpc bdev_malloc_create 64 1024 --name MallocForIscsi0 00:05:10.806 10:05:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_malloc_create 64 1024 --name MallocForIscsi0 00:05:11.064 MallocForIscsi0 00:05:11.064 10:05:44 json_config -- json_config/json_config.sh@227 -- # tgt_rpc iscsi_create_portal_group 1 127.0.0.1:3260 00:05:11.064 10:05:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_portal_group 1 127.0.0.1:3260 00:05:11.322 10:05:44 json_config -- json_config/json_config.sh@228 -- # tgt_rpc iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:05:11.322 10:05:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:05:11.580 10:05:44 json_config -- json_config/json_config.sh@229 -- # tgt_rpc iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:05:11.580 10:05:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:05:11.580 10:05:44 json_config -- json_config/json_config.sh@230 -- # timing_exit create_iscsi_subsystem_config 00:05:11.580 10:05:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.580 10:05:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.580 10:05:44 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:05:11.580 10:05:44 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:11.580 10:05:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.580 10:05:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.838 10:05:44 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:11.838 10:05:44 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.838 10:05:44 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:12.096 MallocBdevForConfigChangeCheck 00:05:12.096 10:05:45 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:12.096 10:05:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.096 10:05:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.096 10:05:45 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:12.096 10:05:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.353 10:05:45 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:12.353 INFO: shutting down applications... 00:05:12.353 10:05:45 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:12.353 10:05:45 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:12.353 10:05:45 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:12.353 10:05:45 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:12.611 Calling clear_iscsi_subsystem 00:05:12.611 Calling clear_nvmf_subsystem 00:05:12.611 Calling clear_nbd_subsystem 00:05:12.611 Calling clear_ublk_subsystem 00:05:12.611 Calling clear_vhost_blk_subsystem 00:05:12.611 Calling clear_vhost_scsi_subsystem 00:05:12.611 Calling clear_bdev_subsystem 00:05:12.611 10:05:45 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:12.611 10:05:45 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:12.611 10:05:45 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:12.611 10:05:45 json_config 
-- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:12.611 10:05:45 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.611 10:05:45 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:12.872 10:05:46 json_config -- json_config/json_config.sh@349 -- # break 00:05:12.872 10:05:46 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:12.872 10:05:46 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:12.872 10:05:46 json_config -- json_config/common.sh@31 -- # local app=target 00:05:12.872 10:05:46 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:12.872 10:05:46 json_config -- json_config/common.sh@35 -- # [[ -n 59402 ]] 00:05:12.872 10:05:46 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59402 00:05:12.872 10:05:46 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:12.872 10:05:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.872 10:05:46 json_config -- json_config/common.sh@41 -- # kill -0 59402 00:05:12.872 10:05:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.440 10:05:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.440 10:05:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.440 10:05:46 json_config -- json_config/common.sh@41 -- # kill -0 59402 00:05:13.440 10:05:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:13.440 10:05:46 json_config -- json_config/common.sh@43 -- # break 00:05:13.440 10:05:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:13.440 SPDK target shutdown done 00:05:13.440 INFO: relaunching applications... 
00:05:13.440 10:05:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:13.440 10:05:46 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:13.440 10:05:46 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:13.440 10:05:46 json_config -- json_config/common.sh@9 -- # local app=target 00:05:13.440 10:05:46 json_config -- json_config/common.sh@10 -- # shift 00:05:13.440 10:05:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.440 10:05:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.440 10:05:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.440 10:05:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.440 10:05:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.440 10:05:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59578 00:05:13.440 Waiting for target to run... 00:05:13.440 10:05:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.440 10:05:46 json_config -- json_config/common.sh@25 -- # waitforlisten 59578 /var/tmp/spdk_tgt.sock 00:05:13.440 10:05:46 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:13.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:13.440 10:05:46 json_config -- common/autotest_common.sh@829 -- # '[' -z 59578 ']' 00:05:13.440 10:05:46 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.440 10:05:46 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.440 10:05:46 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.440 10:05:46 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.440 10:05:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.440 [2024-07-25 10:05:46.689576] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:05:13.440 [2024-07-25 10:05:46.689655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59578 ] 00:05:14.006 [2024-07-25 10:05:47.033999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.006 [2024-07-25 10:05:47.114262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.571 00:05:14.571 INFO: Checking if target configuration is the same... 00:05:14.571 10:05:47 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.571 10:05:47 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:14.571 10:05:47 json_config -- json_config/common.sh@26 -- # echo '' 00:05:14.571 10:05:47 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:14.571 10:05:47 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 
00:05:14.571 10:05:47 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:14.571 10:05:47 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:14.571 10:05:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.571 + '[' 2 -ne 2 ']' 00:05:14.571 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:14.571 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:14.571 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:14.571 +++ basename /dev/fd/62 00:05:14.571 ++ mktemp /tmp/62.XXX 00:05:14.571 + tmp_file_1=/tmp/62.5j0 00:05:14.571 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:14.571 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.571 + tmp_file_2=/tmp/spdk_tgt_config.json.JG3 00:05:14.571 + ret=0 00:05:14.571 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:14.828 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:14.828 + diff -u /tmp/62.5j0 /tmp/spdk_tgt_config.json.JG3 00:05:14.828 INFO: JSON config files are the same 00:05:14.828 + echo 'INFO: JSON config files are the same' 00:05:14.828 + rm /tmp/62.5j0 /tmp/spdk_tgt_config.json.JG3 00:05:14.828 + exit 0 00:05:14.828 INFO: changing configuration and checking if this can be detected... 00:05:14.828 10:05:47 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:14.828 10:05:47 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:05:14.828 10:05:47 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.828 10:05:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:15.085 10:05:48 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:15.085 10:05:48 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:15.085 10:05:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:15.085 + '[' 2 -ne 2 ']' 00:05:15.085 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:15.085 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:15.085 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:15.085 +++ basename /dev/fd/62 00:05:15.085 ++ mktemp /tmp/62.XXX 00:05:15.085 + tmp_file_1=/tmp/62.7Hs 00:05:15.085 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:15.086 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:15.086 + tmp_file_2=/tmp/spdk_tgt_config.json.KsX 00:05:15.086 + ret=0 00:05:15.086 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:15.343 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:15.601 + diff -u /tmp/62.7Hs /tmp/spdk_tgt_config.json.KsX 00:05:15.601 + ret=1 00:05:15.601 + echo '=== Start of file: /tmp/62.7Hs ===' 00:05:15.601 + cat /tmp/62.7Hs 00:05:15.601 + echo '=== End of file: /tmp/62.7Hs ===' 00:05:15.601 + echo '' 00:05:15.601 + echo '=== Start of file: /tmp/spdk_tgt_config.json.KsX ===' 00:05:15.601 + cat /tmp/spdk_tgt_config.json.KsX 00:05:15.601 + echo '=== End of file: /tmp/spdk_tgt_config.json.KsX ===' 00:05:15.601 + echo '' 00:05:15.601 + rm /tmp/62.7Hs 
/tmp/spdk_tgt_config.json.KsX 00:05:15.601 + exit 1 00:05:15.601 INFO: configuration change detected. 00:05:15.601 10:05:48 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:15.601 10:05:48 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:15.601 10:05:48 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:15.601 10:05:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:15.601 10:05:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.601 10:05:48 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:15.601 10:05:48 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:15.601 10:05:48 json_config -- json_config/json_config.sh@321 -- # [[ -n 59578 ]] 00:05:15.601 10:05:48 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:15.601 10:05:48 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:15.601 10:05:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:15.601 10:05:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.601 10:05:48 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:15.601 10:05:48 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:15.601 10:05:48 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:15.601 10:05:48 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:15.601 10:05:48 json_config -- json_config/json_config.sh@201 -- # [[ 1 -eq 1 ]] 00:05:15.601 10:05:48 json_config -- json_config/json_config.sh@202 -- # rbd_cleanup 00:05:15.601 10:05:48 json_config -- common/autotest_common.sh@1031 -- # hash ceph 00:05:15.601 10:05:48 json_config -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:05:15.601 + base_dir=/var/tmp/ceph 
00:05:15.601 + image=/var/tmp/ceph/ceph_raw.img 00:05:15.601 + dev=/dev/loop200 00:05:15.601 + pkill -9 ceph 00:05:15.601 + sleep 3 00:05:18.920 + umount /dev/loop200p2 00:05:18.920 umount: /dev/loop200p2: no mount point specified. 00:05:18.920 + losetup -d /dev/loop200 00:05:18.920 losetup: /dev/loop200: failed to use device: No such device 00:05:18.920 + rm -rf /var/tmp/ceph 00:05:18.920 10:05:51 json_config -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:05:18.920 10:05:51 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:18.920 10:05:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.921 10:05:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.921 10:05:51 json_config -- json_config/json_config.sh@327 -- # killprocess 59578 00:05:18.921 10:05:51 json_config -- common/autotest_common.sh@948 -- # '[' -z 59578 ']' 00:05:18.921 10:05:51 json_config -- common/autotest_common.sh@952 -- # kill -0 59578 00:05:18.921 10:05:51 json_config -- common/autotest_common.sh@953 -- # uname 00:05:18.921 10:05:51 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.921 10:05:51 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59578 00:05:18.921 killing process with pid 59578 00:05:18.921 10:05:51 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.921 10:05:51 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.921 10:05:51 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59578' 00:05:18.921 10:05:51 json_config -- common/autotest_common.sh@967 -- # kill 59578 00:05:18.921 10:05:51 json_config -- common/autotest_common.sh@972 -- # wait 59578 00:05:18.921 10:05:52 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 
00:05:18.921 10:05:52 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:18.921 10:05:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.921 10:05:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.921 INFO: Success 00:05:18.921 10:05:52 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:18.921 10:05:52 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:18.921 ************************************ 00:05:18.921 END TEST json_config 00:05:18.921 ************************************ 00:05:18.921 00:05:18.921 real 0m10.201s 00:05:18.921 user 0m12.708s 00:05:18.921 sys 0m1.725s 00:05:18.921 10:05:52 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.921 10:05:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.921 10:05:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:18.921 10:05:52 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:18.921 10:05:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.921 10:05:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.921 10:05:52 -- common/autotest_common.sh@10 -- # set +x 00:05:18.921 ************************************ 00:05:18.921 START TEST json_config_extra_key 00:05:18.921 ************************************ 00:05:18.921 10:05:52 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:19.179 10:05:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:828469e8-3269-4fb6-840b-068387b38e35 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=828469e8-3269-4fb6-840b-068387b38e35 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.179 10:05:52 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.179 10:05:52 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.179 10:05:52 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.179 10:05:52 json_config_extra_key -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.179 10:05:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.179 10:05:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.179 10:05:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:19.179 10:05:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:19.179 10:05:52 
json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:19.179 10:05:52 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:19.179 10:05:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:19.179 10:05:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:19.179 10:05:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:19.179 10:05:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:19.179 10:05:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:19.179 10:05:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:19.179 10:05:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:19.179 10:05:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:19.179 10:05:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:19.179 10:05:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.179 INFO: launching applications... 00:05:19.179 10:05:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:19.179 10:05:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:19.179 10:05:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:19.179 10:05:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:19.179 10:05:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.179 10:05:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.179 10:05:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.179 10:05:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.179 10:05:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.179 Waiting for target to run... 00:05:19.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.179 10:05:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59759 00:05:19.179 10:05:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:19.179 10:05:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59759 /var/tmp/spdk_tgt.sock 00:05:19.179 10:05:52 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59759 ']' 00:05:19.179 10:05:52 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.179 10:05:52 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.179 10:05:52 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:19.179 10:05:52 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.179 10:05:52 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.179 10:05:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.179 [2024-07-25 10:05:52.295557] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:05:19.179 [2024-07-25 10:05:52.295659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59759 ] 00:05:19.437 [2024-07-25 10:05:52.675229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.695 [2024-07-25 10:05:52.768334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.953 00:05:19.953 INFO: shutting down applications... 
00:05:19.953 10:05:53 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.953 10:05:53 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:19.953 10:05:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:19.953 10:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:19.953 10:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:19.953 10:05:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:19.953 10:05:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:19.953 10:05:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59759 ]] 00:05:19.953 10:05:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59759 00:05:19.953 10:05:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:19.953 10:05:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.953 10:05:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59759 00:05:19.953 10:05:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.519 10:05:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.519 10:05:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.519 10:05:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59759 00:05:20.519 SPDK target shutdown done 00:05:20.519 Success 00:05:20.519 10:05:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:20.519 10:05:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:20.519 10:05:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:20.519 10:05:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:20.519 10:05:53 
json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:20.519 ************************************ 00:05:20.519 END TEST json_config_extra_key 00:05:20.519 ************************************ 00:05:20.519 00:05:20.519 real 0m1.543s 00:05:20.519 user 0m1.307s 00:05:20.519 sys 0m0.412s 00:05:20.519 10:05:53 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.519 10:05:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:20.519 10:05:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:20.519 10:05:53 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.519 10:05:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.519 10:05:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.519 10:05:53 -- common/autotest_common.sh@10 -- # set +x 00:05:20.519 ************************************ 00:05:20.519 START TEST alias_rpc 00:05:20.519 ************************************ 00:05:20.519 10:05:53 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.777 * Looking for test storage... 
00:05:20.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:20.778 10:05:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.778 10:05:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59823 00:05:20.778 10:05:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59823 00:05:20.778 10:05:53 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59823 ']' 00:05:20.778 10:05:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.778 10:05:53 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.778 10:05:53 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.778 10:05:53 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.778 10:05:53 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.778 10:05:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.778 [2024-07-25 10:05:53.903045] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:05:20.778 [2024-07-25 10:05:53.903148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59823 ] 00:05:21.037 [2024-07-25 10:05:54.047044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.037 [2024-07-25 10:05:54.134370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.603 10:05:54 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.603 10:05:54 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:21.603 10:05:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:21.867 10:05:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59823 00:05:21.867 10:05:55 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59823 ']' 00:05:21.867 10:05:55 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59823 00:05:21.867 10:05:55 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:21.867 10:05:55 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.867 10:05:55 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59823 00:05:21.867 killing process with pid 59823 00:05:21.867 10:05:55 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.867 10:05:55 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.867 10:05:55 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59823' 00:05:21.867 10:05:55 alias_rpc -- common/autotest_common.sh@967 -- # kill 59823 00:05:21.867 10:05:55 alias_rpc -- common/autotest_common.sh@972 -- # wait 59823 00:05:22.140 ************************************ 00:05:22.140 END TEST alias_rpc 00:05:22.140 ************************************ 00:05:22.140 00:05:22.140 real 
0m1.634s 00:05:22.140 user 0m1.812s 00:05:22.140 sys 0m0.394s 00:05:22.140 10:05:55 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.140 10:05:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.398 10:05:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:22.398 10:05:55 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:22.398 10:05:55 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:22.398 10:05:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.398 10:05:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.398 10:05:55 -- common/autotest_common.sh@10 -- # set +x 00:05:22.398 ************************************ 00:05:22.398 START TEST spdkcli_tcp 00:05:22.398 ************************************ 00:05:22.398 10:05:55 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:22.398 * Looking for test storage... 00:05:22.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:22.398 10:05:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:22.398 10:05:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:22.398 10:05:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:22.398 10:05:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:22.398 10:05:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:22.398 10:05:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:22.398 10:05:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:22.398 10:05:55 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.398 10:05:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.398 
10:05:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59899 00:05:22.398 10:05:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59899 00:05:22.398 10:05:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:22.399 10:05:55 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59899 ']' 00:05:22.399 10:05:55 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.399 10:05:55 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.399 10:05:55 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.399 10:05:55 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.399 10:05:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.399 [2024-07-25 10:05:55.577927] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:05:22.399 [2024-07-25 10:05:55.577998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59899 ] 00:05:22.658 [2024-07-25 10:05:55.711437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.658 [2024-07-25 10:05:55.800674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.658 [2024-07-25 10:05:55.800676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.594 10:05:56 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.594 10:05:56 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:23.594 10:05:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59916 00:05:23.594 10:05:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:23.594 10:05:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:23.594 [ 00:05:23.594 "bdev_malloc_delete", 00:05:23.594 "bdev_malloc_create", 00:05:23.594 "bdev_null_resize", 00:05:23.594 "bdev_null_delete", 00:05:23.594 "bdev_null_create", 00:05:23.594 "bdev_nvme_cuse_unregister", 00:05:23.594 "bdev_nvme_cuse_register", 00:05:23.594 "bdev_opal_new_user", 00:05:23.594 "bdev_opal_set_lock_state", 00:05:23.594 "bdev_opal_delete", 00:05:23.594 "bdev_opal_get_info", 00:05:23.594 "bdev_opal_create", 00:05:23.594 "bdev_nvme_opal_revert", 00:05:23.594 "bdev_nvme_opal_init", 00:05:23.594 "bdev_nvme_send_cmd", 00:05:23.594 "bdev_nvme_get_path_iostat", 00:05:23.594 "bdev_nvme_get_mdns_discovery_info", 00:05:23.594 "bdev_nvme_stop_mdns_discovery", 00:05:23.594 "bdev_nvme_start_mdns_discovery", 00:05:23.594 "bdev_nvme_set_multipath_policy", 00:05:23.594 "bdev_nvme_set_preferred_path", 00:05:23.594 
"bdev_nvme_get_io_paths", 00:05:23.594 "bdev_nvme_remove_error_injection", 00:05:23.594 "bdev_nvme_add_error_injection", 00:05:23.594 "bdev_nvme_get_discovery_info", 00:05:23.594 "bdev_nvme_stop_discovery", 00:05:23.594 "bdev_nvme_start_discovery", 00:05:23.594 "bdev_nvme_get_controller_health_info", 00:05:23.594 "bdev_nvme_disable_controller", 00:05:23.594 "bdev_nvme_enable_controller", 00:05:23.594 "bdev_nvme_reset_controller", 00:05:23.594 "bdev_nvme_get_transport_statistics", 00:05:23.594 "bdev_nvme_apply_firmware", 00:05:23.594 "bdev_nvme_detach_controller", 00:05:23.594 "bdev_nvme_get_controllers", 00:05:23.594 "bdev_nvme_attach_controller", 00:05:23.594 "bdev_nvme_set_hotplug", 00:05:23.594 "bdev_nvme_set_options", 00:05:23.594 "bdev_passthru_delete", 00:05:23.594 "bdev_passthru_create", 00:05:23.594 "bdev_lvol_set_parent_bdev", 00:05:23.594 "bdev_lvol_set_parent", 00:05:23.594 "bdev_lvol_check_shallow_copy", 00:05:23.594 "bdev_lvol_start_shallow_copy", 00:05:23.594 "bdev_lvol_grow_lvstore", 00:05:23.594 "bdev_lvol_get_lvols", 00:05:23.594 "bdev_lvol_get_lvstores", 00:05:23.594 "bdev_lvol_delete", 00:05:23.594 "bdev_lvol_set_read_only", 00:05:23.594 "bdev_lvol_resize", 00:05:23.594 "bdev_lvol_decouple_parent", 00:05:23.594 "bdev_lvol_inflate", 00:05:23.594 "bdev_lvol_rename", 00:05:23.594 "bdev_lvol_clone_bdev", 00:05:23.594 "bdev_lvol_clone", 00:05:23.594 "bdev_lvol_snapshot", 00:05:23.594 "bdev_lvol_create", 00:05:23.594 "bdev_lvol_delete_lvstore", 00:05:23.594 "bdev_lvol_rename_lvstore", 00:05:23.594 "bdev_lvol_create_lvstore", 00:05:23.594 "bdev_raid_set_options", 00:05:23.594 "bdev_raid_remove_base_bdev", 00:05:23.594 "bdev_raid_add_base_bdev", 00:05:23.594 "bdev_raid_delete", 00:05:23.594 "bdev_raid_create", 00:05:23.594 "bdev_raid_get_bdevs", 00:05:23.594 "bdev_error_inject_error", 00:05:23.594 "bdev_error_delete", 00:05:23.594 "bdev_error_create", 00:05:23.594 "bdev_split_delete", 00:05:23.594 "bdev_split_create", 00:05:23.594 "bdev_delay_delete", 
00:05:23.594 "bdev_delay_create", 00:05:23.594 "bdev_delay_update_latency", 00:05:23.594 "bdev_zone_block_delete", 00:05:23.594 "bdev_zone_block_create", 00:05:23.594 "blobfs_create", 00:05:23.594 "blobfs_detect", 00:05:23.594 "blobfs_set_cache_size", 00:05:23.594 "bdev_aio_delete", 00:05:23.594 "bdev_aio_rescan", 00:05:23.594 "bdev_aio_create", 00:05:23.594 "bdev_ftl_set_property", 00:05:23.594 "bdev_ftl_get_properties", 00:05:23.594 "bdev_ftl_get_stats", 00:05:23.594 "bdev_ftl_unmap", 00:05:23.594 "bdev_ftl_unload", 00:05:23.594 "bdev_ftl_delete", 00:05:23.594 "bdev_ftl_load", 00:05:23.594 "bdev_ftl_create", 00:05:23.594 "bdev_virtio_attach_controller", 00:05:23.594 "bdev_virtio_scsi_get_devices", 00:05:23.594 "bdev_virtio_detach_controller", 00:05:23.594 "bdev_virtio_blk_set_hotplug", 00:05:23.594 "bdev_iscsi_delete", 00:05:23.594 "bdev_iscsi_create", 00:05:23.594 "bdev_iscsi_set_options", 00:05:23.594 "bdev_rbd_get_clusters_info", 00:05:23.594 "bdev_rbd_unregister_cluster", 00:05:23.594 "bdev_rbd_register_cluster", 00:05:23.594 "bdev_rbd_resize", 00:05:23.594 "bdev_rbd_delete", 00:05:23.594 "bdev_rbd_create", 00:05:23.594 "accel_error_inject_error", 00:05:23.594 "ioat_scan_accel_module", 00:05:23.594 "dsa_scan_accel_module", 00:05:23.594 "iaa_scan_accel_module", 00:05:23.594 "keyring_file_remove_key", 00:05:23.594 "keyring_file_add_key", 00:05:23.594 "keyring_linux_set_options", 00:05:23.594 "iscsi_get_histogram", 00:05:23.594 "iscsi_enable_histogram", 00:05:23.594 "iscsi_set_options", 00:05:23.594 "iscsi_get_auth_groups", 00:05:23.594 "iscsi_auth_group_remove_secret", 00:05:23.594 "iscsi_auth_group_add_secret", 00:05:23.594 "iscsi_delete_auth_group", 00:05:23.594 "iscsi_create_auth_group", 00:05:23.594 "iscsi_set_discovery_auth", 00:05:23.594 "iscsi_get_options", 00:05:23.594 "iscsi_target_node_request_logout", 00:05:23.594 "iscsi_target_node_set_redirect", 00:05:23.594 "iscsi_target_node_set_auth", 00:05:23.594 "iscsi_target_node_add_lun", 00:05:23.594 
"iscsi_get_stats", 00:05:23.594 "iscsi_get_connections", 00:05:23.594 "iscsi_portal_group_set_auth", 00:05:23.594 "iscsi_start_portal_group", 00:05:23.594 "iscsi_delete_portal_group", 00:05:23.594 "iscsi_create_portal_group", 00:05:23.594 "iscsi_get_portal_groups", 00:05:23.594 "iscsi_delete_target_node", 00:05:23.594 "iscsi_target_node_remove_pg_ig_maps", 00:05:23.594 "iscsi_target_node_add_pg_ig_maps", 00:05:23.594 "iscsi_create_target_node", 00:05:23.594 "iscsi_get_target_nodes", 00:05:23.594 "iscsi_delete_initiator_group", 00:05:23.594 "iscsi_initiator_group_remove_initiators", 00:05:23.594 "iscsi_initiator_group_add_initiators", 00:05:23.594 "iscsi_create_initiator_group", 00:05:23.594 "iscsi_get_initiator_groups", 00:05:23.594 "nvmf_set_crdt", 00:05:23.594 "nvmf_set_config", 00:05:23.594 "nvmf_set_max_subsystems", 00:05:23.594 "nvmf_stop_mdns_prr", 00:05:23.594 "nvmf_publish_mdns_prr", 00:05:23.594 "nvmf_subsystem_get_listeners", 00:05:23.594 "nvmf_subsystem_get_qpairs", 00:05:23.594 "nvmf_subsystem_get_controllers", 00:05:23.594 "nvmf_get_stats", 00:05:23.594 "nvmf_get_transports", 00:05:23.594 "nvmf_create_transport", 00:05:23.594 "nvmf_get_targets", 00:05:23.594 "nvmf_delete_target", 00:05:23.594 "nvmf_create_target", 00:05:23.594 "nvmf_subsystem_allow_any_host", 00:05:23.594 "nvmf_subsystem_remove_host", 00:05:23.594 "nvmf_subsystem_add_host", 00:05:23.594 "nvmf_ns_remove_host", 00:05:23.594 "nvmf_ns_add_host", 00:05:23.594 "nvmf_subsystem_remove_ns", 00:05:23.594 "nvmf_subsystem_add_ns", 00:05:23.594 "nvmf_subsystem_listener_set_ana_state", 00:05:23.594 "nvmf_discovery_get_referrals", 00:05:23.594 "nvmf_discovery_remove_referral", 00:05:23.594 "nvmf_discovery_add_referral", 00:05:23.594 "nvmf_subsystem_remove_listener", 00:05:23.594 "nvmf_subsystem_add_listener", 00:05:23.594 "nvmf_delete_subsystem", 00:05:23.594 "nvmf_create_subsystem", 00:05:23.594 "nvmf_get_subsystems", 00:05:23.594 "env_dpdk_get_mem_stats", 00:05:23.594 "nbd_get_disks", 00:05:23.594 
"nbd_stop_disk", 00:05:23.594 "nbd_start_disk", 00:05:23.594 "ublk_recover_disk", 00:05:23.595 "ublk_get_disks", 00:05:23.595 "ublk_stop_disk", 00:05:23.595 "ublk_start_disk", 00:05:23.595 "ublk_destroy_target", 00:05:23.595 "ublk_create_target", 00:05:23.595 "virtio_blk_create_transport", 00:05:23.595 "virtio_blk_get_transports", 00:05:23.595 "vhost_controller_set_coalescing", 00:05:23.595 "vhost_get_controllers", 00:05:23.595 "vhost_delete_controller", 00:05:23.595 "vhost_create_blk_controller", 00:05:23.595 "vhost_scsi_controller_remove_target", 00:05:23.595 "vhost_scsi_controller_add_target", 00:05:23.595 "vhost_start_scsi_controller", 00:05:23.595 "vhost_create_scsi_controller", 00:05:23.595 "thread_set_cpumask", 00:05:23.595 "framework_get_governor", 00:05:23.595 "framework_get_scheduler", 00:05:23.595 "framework_set_scheduler", 00:05:23.595 "framework_get_reactors", 00:05:23.595 "thread_get_io_channels", 00:05:23.595 "thread_get_pollers", 00:05:23.595 "thread_get_stats", 00:05:23.595 "framework_monitor_context_switch", 00:05:23.595 "spdk_kill_instance", 00:05:23.595 "log_enable_timestamps", 00:05:23.595 "log_get_flags", 00:05:23.595 "log_clear_flag", 00:05:23.595 "log_set_flag", 00:05:23.595 "log_get_level", 00:05:23.595 "log_set_level", 00:05:23.595 "log_get_print_level", 00:05:23.595 "log_set_print_level", 00:05:23.595 "framework_enable_cpumask_locks", 00:05:23.595 "framework_disable_cpumask_locks", 00:05:23.595 "framework_wait_init", 00:05:23.595 "framework_start_init", 00:05:23.595 "scsi_get_devices", 00:05:23.595 "bdev_get_histogram", 00:05:23.595 "bdev_enable_histogram", 00:05:23.595 "bdev_set_qos_limit", 00:05:23.595 "bdev_set_qd_sampling_period", 00:05:23.595 "bdev_get_bdevs", 00:05:23.595 "bdev_reset_iostat", 00:05:23.595 "bdev_get_iostat", 00:05:23.595 "bdev_examine", 00:05:23.595 "bdev_wait_for_examine", 00:05:23.595 "bdev_set_options", 00:05:23.595 "notify_get_notifications", 00:05:23.595 "notify_get_types", 00:05:23.595 "accel_get_stats", 
00:05:23.595 "accel_set_options", 00:05:23.595 "accel_set_driver", 00:05:23.595 "accel_crypto_key_destroy", 00:05:23.595 "accel_crypto_keys_get", 00:05:23.595 "accel_crypto_key_create", 00:05:23.595 "accel_assign_opc", 00:05:23.595 "accel_get_module_info", 00:05:23.595 "accel_get_opc_assignments", 00:05:23.595 "vmd_rescan", 00:05:23.595 "vmd_remove_device", 00:05:23.595 "vmd_enable", 00:05:23.595 "sock_get_default_impl", 00:05:23.595 "sock_set_default_impl", 00:05:23.595 "sock_impl_set_options", 00:05:23.595 "sock_impl_get_options", 00:05:23.595 "iobuf_get_stats", 00:05:23.595 "iobuf_set_options", 00:05:23.595 "framework_get_pci_devices", 00:05:23.595 "framework_get_config", 00:05:23.595 "framework_get_subsystems", 00:05:23.595 "trace_get_info", 00:05:23.595 "trace_get_tpoint_group_mask", 00:05:23.595 "trace_disable_tpoint_group", 00:05:23.595 "trace_enable_tpoint_group", 00:05:23.595 "trace_clear_tpoint_mask", 00:05:23.595 "trace_set_tpoint_mask", 00:05:23.595 "keyring_get_keys", 00:05:23.595 "spdk_get_version", 00:05:23.595 "rpc_get_methods" 00:05:23.595 ] 00:05:23.595 10:05:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:23.595 10:05:56 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.595 10:05:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.595 10:05:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:23.595 10:05:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59899 00:05:23.595 10:05:56 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59899 ']' 00:05:23.595 10:05:56 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59899 00:05:23.595 10:05:56 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:23.595 10:05:56 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.595 10:05:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59899 00:05:23.595 killing process with pid 59899 
00:05:23.595 10:05:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.595 10:05:56 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.595 10:05:56 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59899' 00:05:23.595 10:05:56 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59899 00:05:23.595 10:05:56 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59899 00:05:24.160 ************************************ 00:05:24.160 END TEST spdkcli_tcp 00:05:24.160 ************************************ 00:05:24.160 00:05:24.160 real 0m1.745s 00:05:24.160 user 0m3.293s 00:05:24.160 sys 0m0.417s 00:05:24.160 10:05:57 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.160 10:05:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.160 10:05:57 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.160 10:05:57 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.160 10:05:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.160 10:05:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.160 10:05:57 -- common/autotest_common.sh@10 -- # set +x 00:05:24.160 ************************************ 00:05:24.160 START TEST dpdk_mem_utility 00:05:24.160 ************************************ 00:05:24.160 10:05:57 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.160 * Looking for test storage... 00:05:24.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:24.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:24.160 10:05:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:24.160 10:05:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59985 00:05:24.160 10:05:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59985 00:05:24.160 10:05:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.160 10:05:57 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59985 ']' 00:05:24.161 10:05:57 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.161 10:05:57 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.161 10:05:57 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.161 10:05:57 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.161 10:05:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.161 [2024-07-25 10:05:57.403921] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:05:24.161 [2024-07-25 10:05:57.404253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59985 ] 00:05:24.417 [2024-07-25 10:05:57.543913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.417 [2024-07-25 10:05:57.639252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.982 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.982 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:24.982 10:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:24.982 10:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:24.982 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.982 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.982 { 00:05:24.982 "filename": "/tmp/spdk_mem_dump.txt" 00:05:24.982 } 00:05:24.982 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.982 10:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:25.241 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:25.241 1 heaps totaling size 814.000000 MiB 00:05:25.241 size: 814.000000 MiB heap id: 0 00:05:25.241 end heaps---------- 00:05:25.241 8 mempools totaling size 598.116089 MiB 00:05:25.241 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:25.241 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:25.241 size: 84.521057 MiB name: bdev_io_59985 00:05:25.241 size: 51.011292 MiB name: evtpool_59985 00:05:25.241 size: 50.003479 MiB name: msgpool_59985 00:05:25.241 size: 
21.763794 MiB name: PDU_Pool 00:05:25.241 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:25.241 size: 0.026123 MiB name: Session_Pool 00:05:25.241 end mempools------- 00:05:25.241 6 memzones totaling size 4.142822 MiB 00:05:25.241 size: 1.000366 MiB name: RG_ring_0_59985 00:05:25.241 size: 1.000366 MiB name: RG_ring_1_59985 00:05:25.241 size: 1.000366 MiB name: RG_ring_4_59985 00:05:25.241 size: 1.000366 MiB name: RG_ring_5_59985 00:05:25.241 size: 0.125366 MiB name: RG_ring_2_59985 00:05:25.241 size: 0.015991 MiB name: RG_ring_3_59985 00:05:25.241 end memzones------- 00:05:25.241 10:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:25.241 heap id: 0 total size: 814.000000 MiB number of busy elements: 291 number of free elements: 15 00:05:25.241 list of free elements. size: 12.473572 MiB 00:05:25.241 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:25.241 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:25.241 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:25.241 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:25.241 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:25.241 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:25.241 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:25.241 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:25.241 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:25.241 element at address: 0x20001aa00000 with size: 0.570251 MiB 00:05:25.241 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:25.241 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:25.241 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:25.241 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:25.241 element at address: 0x200003a00000 with size: 0.348572 MiB 00:05:25.241 list of standard 
malloc elements. size: 199.263855 MiB 00:05:25.241 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:25.241 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:25.241 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:25.241 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:25.241 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:25.241 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:25.241 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:25.241 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:25.241 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:25.241 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:25.241 
element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d77c0 with size: 0.000183 
MiB 00:05:25.241 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:25.241 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:25.241 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:25.241 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:25.241 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:25.241 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:25.241 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:25.241 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:25.241 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:25.241 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:25.241 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:25.241 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:25.241 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:25.241 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:25.241 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:25.241 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:25.241 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:25.241 element at address: 0x200003a599c0 
with size: 0.000183 MiB 00:05:25.241 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:25.241 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:25.241 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:25.242 element at 
address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:25.242 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:25.242 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:25.242 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 
00:05:25.242 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa934c0 with 
size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:25.242 element at address: 
0x20001aa949c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:25.242 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:25.242 
element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:25.242 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6df80 with size: 0.000183 
MiB 00:05:25.243 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6f480 
with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:25.243 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:25.243 list of memzone associated elements. 
size: 602.262573 MiB 00:05:25.243 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:25.243 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:25.243 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:25.243 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:25.243 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:25.243 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59985_0 00:05:25.243 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:25.243 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59985_0 00:05:25.243 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:25.243 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59985_0 00:05:25.243 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:25.243 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:25.243 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:25.243 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:25.243 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:25.243 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59985 00:05:25.243 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:25.243 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59985 00:05:25.243 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:25.243 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59985 00:05:25.243 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:25.243 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:25.243 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:25.243 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:25.243 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:25.243 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:25.243 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:25.243 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:25.243 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:25.243 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59985 00:05:25.243 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:25.243 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59985 00:05:25.243 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:25.243 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59985 00:05:25.243 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:25.243 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59985 00:05:25.243 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:25.243 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59985 00:05:25.243 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:25.243 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:25.243 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:25.243 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:25.243 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:25.243 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:25.243 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:25.243 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59985 00:05:25.243 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:25.243 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:25.243 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:25.243 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:25.243 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:25.243 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_59985
00:05:25.243 element at address: 0x200027e6b7c0 with size: 0.002441 MiB
00:05:25.243 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:25.243 element at address: 0x2000002d6780 with size: 0.000305 MiB
00:05:25.243 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59985
00:05:25.243 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:05:25.243 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59985
00:05:25.243 element at address: 0x200027e6c280 with size: 0.000305 MiB
00:05:25.243 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:25.243 10:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:25.243 10:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59985
00:05:25.243 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59985 ']'
00:05:25.243 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59985
00:05:25.243 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname
00:05:25.243 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:25.243 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59985
00:05:25.243 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:25.243 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:25.243 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59985'
00:05:25.243 killing process with pid 59985
00:05:25.243 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59985
00:05:25.243 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59985
00:05:25.501 ************************************
00:05:25.501 END TEST dpdk_mem_utility
00:05:25.501 ************************************
00:05:25.501
00:05:25.501 real 0m1.454s
00:05:25.501 user 0m1.469s
00:05:25.501 sys 0m0.383s
00:05:25.501 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:25.501 10:05:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:25.501 10:05:58 -- common/autotest_common.sh@1142 -- # return 0
00:05:25.501 10:05:58 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:25.501 10:05:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:25.501 10:05:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:25.501 10:05:58 -- common/autotest_common.sh@10 -- # set +x
00:05:25.501 ************************************
00:05:25.501 START TEST event
00:05:25.501 ************************************
00:05:25.501 10:05:58 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:25.770 * Looking for test storage...
00:05:25.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:25.770 10:05:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:25.770 10:05:58 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:25.770 10:05:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:25.770 10:05:58 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:25.770 10:05:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:25.770 10:05:58 event -- common/autotest_common.sh@10 -- # set +x
00:05:25.770 ************************************
00:05:25.770 START TEST event_perf
00:05:25.770 ************************************
00:05:25.770 10:05:58 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:25.770 Running I/O for 1 seconds...[2024-07-25 10:05:58.871939] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization...
00:05:25.770 [2024-07-25 10:05:58.872037] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60056 ]
00:05:25.770 [2024-07-25 10:05:59.011776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:26.028 [2024-07-25 10:05:59.109703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:26.028 [2024-07-25 10:05:59.109854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:26.028 [2024-07-25 10:05:59.110019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:26.028 [2024-07-25 10:05:59.110024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:26.960 Running I/O for 1 seconds...
00:05:26.960 lcore 0: 193047
00:05:26.960 lcore 1: 193047
00:05:26.960 lcore 2: 193046
00:05:26.960 lcore 3: 193047
00:05:26.960 done.
00:05:26.960
00:05:26.960 real 0m1.341s
00:05:26.960 user 0m4.152s
00:05:26.960 sys 0m0.066s
00:05:26.960 10:06:00 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:26.960 ************************************
00:05:26.960 END TEST event_perf
00:05:26.960 ************************************
00:05:26.960 10:06:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:27.218 10:06:00 event -- common/autotest_common.sh@1142 -- # return 0
00:05:27.218 10:06:00 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:27.218 10:06:00 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:05:27.218 10:06:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:27.218 10:06:00 event -- common/autotest_common.sh@10 -- # set +x
00:05:27.218 ************************************
00:05:27.218 START TEST event_reactor
00:05:27.218 ************************************
00:05:27.218 10:06:00 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:27.218 [2024-07-25 10:06:00.271105] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization...
00:05:27.218 [2024-07-25 10:06:00.271177] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60099 ]
00:05:27.218 [2024-07-25 10:06:00.408203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:27.476 [2024-07-25 10:06:00.504991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:28.410 test_start
00:05:28.410 oneshot
00:05:28.410 tick 100
00:05:28.410 tick 100
00:05:28.410 tick 250
00:05:28.410 tick 100
00:05:28.410 tick 100
00:05:28.410 tick 100
00:05:28.410 tick 250
00:05:28.410 tick 500
00:05:28.410 tick 100
00:05:28.410 tick 100
00:05:28.410 tick 250
00:05:28.410 tick 100
00:05:28.410 tick 100
00:05:28.410 test_end
00:05:28.410
00:05:28.410 real 0m1.328s
00:05:28.410 user 0m1.172s
00:05:28.410 sys 0m0.049s
00:05:28.410 ************************************
00:05:28.410 END TEST event_reactor
00:05:28.410 ************************************
00:05:28.410 10:06:01 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:28.410 10:06:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:28.410 10:06:01 event -- common/autotest_common.sh@1142 -- # return 0
00:05:28.410 10:06:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:28.410 10:06:01 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:05:28.410 10:06:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:28.410 10:06:01 event -- common/autotest_common.sh@10 -- # set +x
00:05:28.410 ************************************
00:05:28.410 START TEST event_reactor_perf
00:05:28.410 ************************************
00:05:28.410 10:06:01 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:28.410 [2024-07-25 10:06:01.653352] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization...
00:05:28.410 [2024-07-25 10:06:01.653467] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60133 ]
00:05:28.670 [2024-07-25 10:06:01.795471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:28.670 [2024-07-25 10:06:01.892621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.045 test_start
00:05:30.045 test_end
00:05:30.045 Performance: 474389 events per second
00:05:30.045
00:05:30.045 real 0m1.341s
00:05:30.045 user 0m1.180s
00:05:30.045 sys 0m0.054s
00:05:30.045 10:06:02 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:30.045 10:06:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:30.045 ************************************
00:05:30.045 END TEST event_reactor_perf
00:05:30.045 ************************************
00:05:30.045 10:06:03 event -- common/autotest_common.sh@1142 -- # return 0
00:05:30.045 10:06:03 event -- event/event.sh@49 -- # uname -s
00:05:30.045 10:06:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:30.045 10:06:03 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:30.045 10:06:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:30.045 10:06:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:30.045 10:06:03 event -- common/autotest_common.sh@10 -- # set +x
00:05:30.045 ************************************
00:05:30.045 START TEST event_scheduler
00:05:30.045 ************************************
00:05:30.045 10:06:03
event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:30.045 * Looking for test storage... 00:05:30.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:30.045 10:06:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:30.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.045 10:06:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60190 00:05:30.045 10:06:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.045 10:06:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:30.045 10:06:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60190 00:05:30.045 10:06:03 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60190 ']' 00:05:30.045 10:06:03 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.045 10:06:03 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.045 10:06:03 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.045 10:06:03 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.045 10:06:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.045 [2024-07-25 10:06:03.190834] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:05:30.045 [2024-07-25 10:06:03.190927] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60190 ]
00:05:30.302 [2024-07-25 10:06:03.331660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:30.302 [2024-07-25 10:06:03.441754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.302 [2024-07-25 10:06:03.441815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:30.302 [2024-07-25 10:06:03.441972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:30.302 [2024-07-25 10:06:03.441973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:31.233 10:06:04 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:31.233 10:06:04 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0
00:05:31.233 10:06:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:31.233 10:06:04 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:31.233 10:06:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:31.233 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:31.233 POWER: Cannot set governor of lcore 0 to userspace
00:05:31.233 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:31.233 POWER: Cannot set governor of lcore 0 to performance
00:05:31.233 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:31.233 POWER: Cannot set governor of lcore 0 to userspace
00:05:31.233 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:31.233 POWER: Cannot set governor of lcore 0 to userspace
00:05:31.233 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:05:31.233 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:05:31.233 POWER: Unable to set Power Management Environment for lcore 0
00:05:31.233 [2024-07-25 10:06:04.144628] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:05:31.233 [2024-07-25 10:06:04.144668] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:05:31.233 [2024-07-25 10:06:04.144704] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:05:31.233 [2024-07-25 10:06:04.144739] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:31.233 [2024-07-25 10:06:04.144767] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:31.233 [2024-07-25 10:06:04.144838] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:31.234 10:06:04 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:31.234 10:06:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:31.234 10:06:04 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:31.234 10:06:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:31.234 [2024-07-25 10:06:04.220756] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:31.234 10:06:04 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.234 10:06:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:31.234 10:06:04 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.234 10:06:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.234 10:06:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 ************************************ 00:05:31.234 START TEST scheduler_create_thread 00:05:31.234 ************************************ 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 2 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 3 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 4 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 5 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 6 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:31.234 7 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 8 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 9 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 10 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.234 10:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.609 10:06:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.609 10:06:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:32.609 10:06:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:32.609 10:06:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.609 10:06:05 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.982 ************************************ 00:05:33.982 END TEST scheduler_create_thread 00:05:33.982 ************************************ 00:05:33.982 10:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.982 00:05:33.982 real 0m2.615s 00:05:33.982 user 0m0.019s 00:05:33.982 sys 0m0.009s 00:05:33.982 10:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.982 10:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.982 10:06:06 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:33.982 10:06:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:33.982 10:06:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60190 00:05:33.982 10:06:06 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60190 ']' 00:05:33.982 10:06:06 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60190 00:05:33.982 10:06:06 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:33.982 10:06:06 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.982 10:06:06 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60190 00:05:33.982 10:06:06 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:33.982 10:06:06 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:33.982 10:06:06 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60190' 00:05:33.982 killing process with pid 60190 00:05:33.982 10:06:06 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60190 00:05:33.982 10:06:06 event.event_scheduler -- 
common/autotest_common.sh@972 -- # wait 60190 00:05:34.241 [2024-07-25 10:06:07.328136] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:34.499 ************************************ 00:05:34.499 END TEST event_scheduler 00:05:34.499 ************************************ 00:05:34.499 00:05:34.499 real 0m4.501s 00:05:34.499 user 0m8.429s 00:05:34.499 sys 0m0.364s 00:05:34.499 10:06:07 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.499 10:06:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.499 10:06:07 event -- common/autotest_common.sh@1142 -- # return 0 00:05:34.499 10:06:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:34.499 10:06:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:34.499 10:06:07 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.499 10:06:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.499 10:06:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.499 ************************************ 00:05:34.499 START TEST app_repeat 00:05:34.499 ************************************ 00:05:34.499 10:06:07 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:34.499 10:06:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.499 10:06:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.499 10:06:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:34.499 10:06:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.499 10:06:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:34.499 10:06:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:34.499 10:06:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:34.499 10:06:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60290 
00:05:34.500 10:06:07 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:34.500 10:06:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.500 10:06:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60290' 00:05:34.500 Process app_repeat pid: 60290 00:05:34.500 10:06:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.500 spdk_app_start Round 0 00:05:34.500 10:06:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:34.500 10:06:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60290 /var/tmp/spdk-nbd.sock 00:05:34.500 10:06:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60290 ']' 00:05:34.500 10:06:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.500 10:06:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.500 10:06:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.500 10:06:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.500 10:06:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.500 [2024-07-25 10:06:07.629188] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:05:34.500 [2024-07-25 10:06:07.629254] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60290 ] 00:05:34.758 [2024-07-25 10:06:07.761112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.758 [2024-07-25 10:06:07.858638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.758 [2024-07-25 10:06:07.858644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.758 10:06:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.758 10:06:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:34.758 10:06:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.015 Malloc0 00:05:35.015 10:06:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.273 Malloc1 00:05:35.273 10:06:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.273 10:06:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.273 10:06:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.273 10:06:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.273 10:06:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.273 10:06:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.273 10:06:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.273 10:06:08 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.273 10:06:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.273 10:06:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.273 10:06:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.273 10:06:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.273 10:06:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.273 10:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.273 10:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.273 10:06:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.531 /dev/nbd0 00:05:35.531 10:06:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.531 10:06:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.531 10:06:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:35.531 10:06:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:35.531 10:06:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:35.531 10:06:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:35.531 10:06:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:35.531 10:06:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:35.531 10:06:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:35.531 10:06:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:35.531 10:06:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.531 1+0 records in 00:05:35.531 1+0 
records out 00:05:35.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626026 s, 6.5 MB/s 00:05:35.531 10:06:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.531 10:06:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:35.531 10:06:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.531 10:06:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:35.531 10:06:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:35.531 10:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.531 10:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.531 10:06:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.789 /dev/nbd1 00:05:35.789 10:06:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.789 10:06:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.789 10:06:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:35.789 10:06:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:35.789 10:06:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:35.789 10:06:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:35.789 10:06:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:35.789 10:06:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:35.789 10:06:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:35.789 10:06:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:35.789 10:06:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.789 1+0 records in 00:05:35.789 1+0 records out 00:05:35.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171237 s, 23.9 MB/s 00:05:35.789 10:06:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.789 10:06:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:35.789 10:06:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.789 10:06:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:35.789 10:06:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:35.789 10:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.789 10:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.789 10:06:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.789 10:06:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.789 10:06:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.048 { 00:05:36.048 "nbd_device": "/dev/nbd0", 00:05:36.048 "bdev_name": "Malloc0" 00:05:36.048 }, 00:05:36.048 { 00:05:36.048 "nbd_device": "/dev/nbd1", 00:05:36.048 "bdev_name": "Malloc1" 00:05:36.048 } 00:05:36.048 ]' 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.048 { 00:05:36.048 "nbd_device": "/dev/nbd0", 00:05:36.048 "bdev_name": "Malloc0" 00:05:36.048 }, 00:05:36.048 { 00:05:36.048 "nbd_device": "/dev/nbd1", 00:05:36.048 "bdev_name": "Malloc1" 00:05:36.048 } 00:05:36.048 ]' 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.048 /dev/nbd1' 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.048 /dev/nbd1' 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.048 256+0 records in 00:05:36.048 256+0 records out 00:05:36.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107984 s, 97.1 MB/s 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.048 256+0 records in 00:05:36.048 256+0 records out 00:05:36.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027071 s, 38.7 MB/s 00:05:36.048 10:06:09 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.048 256+0 records in 00:05:36.048 256+0 records out 00:05:36.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024891 s, 42.1 MB/s 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.048 10:06:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.306 10:06:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.306 10:06:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.306 10:06:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.306 10:06:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.306 10:06:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.306 10:06:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.306 10:06:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.306 10:06:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.306 10:06:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.306 10:06:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.564 10:06:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.564 10:06:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.564 10:06:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.564 10:06:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.564 10:06:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.564 10:06:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.564 10:06:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:36.564 10:06:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.564 10:06:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.564 10:06:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.564 10:06:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.822 10:06:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.822 10:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.822 10:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.822 10:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.822 10:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.822 10:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.822 10:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.822 10:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.822 10:06:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.822 10:06:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.822 10:06:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.822 10:06:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.822 10:06:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.386 10:06:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.386 [2024-07-25 10:06:10.498265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.386 [2024-07-25 10:06:10.595137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.386 [2024-07-25 10:06:10.595144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.386 
[2024-07-25 10:06:10.636393] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.386 [2024-07-25 10:06:10.636448] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.661 spdk_app_start Round 1 00:05:40.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.661 10:06:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.661 10:06:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:40.661 10:06:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60290 /var/tmp/spdk-nbd.sock 00:05:40.661 10:06:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60290 ']' 00:05:40.661 10:06:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.661 10:06:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.661 10:06:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:40.661 10:06:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.661 10:06:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.661 10:06:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.661 10:06:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:40.661 10:06:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.661 Malloc0 00:05:40.661 10:06:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.919 Malloc1 00:05:40.919 10:06:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.919 10:06:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.919 10:06:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.919 10:06:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.919 10:06:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.919 10:06:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.919 10:06:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.919 10:06:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.919 10:06:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.919 10:06:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.919 10:06:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.919 10:06:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.919 10:06:14 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.919 10:06:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.919 10:06:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.919 10:06:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.178 /dev/nbd0 00:05:41.178 10:06:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.178 10:06:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.178 10:06:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:41.178 10:06:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:41.178 10:06:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:41.178 10:06:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:41.178 10:06:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:41.178 10:06:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:41.178 10:06:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:41.178 10:06:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:41.178 10:06:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.178 1+0 records in 00:05:41.178 1+0 records out 00:05:41.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253073 s, 16.2 MB/s 00:05:41.178 10:06:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.178 10:06:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:41.178 10:06:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.178 
10:06:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:41.178 10:06:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:41.178 10:06:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.178 10:06:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.178 10:06:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.438 /dev/nbd1 00:05:41.438 10:06:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.438 10:06:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.438 10:06:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:41.438 10:06:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:41.438 10:06:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:41.438 10:06:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:41.438 10:06:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:41.438 10:06:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:41.438 10:06:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:41.438 10:06:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:41.438 10:06:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.438 1+0 records in 00:05:41.438 1+0 records out 00:05:41.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204005 s, 20.1 MB/s 00:05:41.438 10:06:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.438 10:06:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:41.438 10:06:14 event.app_repeat 
-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.438 10:06:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:41.438 10:06:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:41.438 10:06:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.438 10:06:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.438 10:06:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.438 10:06:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.438 10:06:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.697 { 00:05:41.697 "nbd_device": "/dev/nbd0", 00:05:41.697 "bdev_name": "Malloc0" 00:05:41.697 }, 00:05:41.697 { 00:05:41.697 "nbd_device": "/dev/nbd1", 00:05:41.697 "bdev_name": "Malloc1" 00:05:41.697 } 00:05:41.697 ]' 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.697 { 00:05:41.697 "nbd_device": "/dev/nbd0", 00:05:41.697 "bdev_name": "Malloc0" 00:05:41.697 }, 00:05:41.697 { 00:05:41.697 "nbd_device": "/dev/nbd1", 00:05:41.697 "bdev_name": "Malloc1" 00:05:41.697 } 00:05:41.697 ]' 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.697 /dev/nbd1' 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.697 /dev/nbd1' 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.697 
10:06:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.697 256+0 records in 00:05:41.697 256+0 records out 00:05:41.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131187 s, 79.9 MB/s 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.697 256+0 records in 00:05:41.697 256+0 records out 00:05:41.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215581 s, 48.6 MB/s 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.697 256+0 records in 00:05:41.697 256+0 records out 00:05:41.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223225 s, 47.0 MB/s 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.697 10:06:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.956 10:06:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.956 10:06:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.956 10:06:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.956 10:06:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.956 10:06:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.956 10:06:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.956 10:06:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.956 10:06:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.956 10:06:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.956 10:06:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.956 10:06:15 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.956 10:06:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.956 10:06:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.956 10:06:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.956 10:06:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.956 10:06:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.956 10:06:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.956 10:06:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.956 10:06:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.956 10:06:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.213 10:06:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.213 10:06:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.213 10:06:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.213 10:06:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.213 10:06:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.213 10:06:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.213 10:06:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.213 10:06:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.213 10:06:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.213 10:06:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.213 10:06:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.471 10:06:15 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.471 10:06:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.471 10:06:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.471 10:06:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.471 10:06:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.471 10:06:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.471 10:06:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.471 10:06:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.471 10:06:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.471 10:06:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.471 10:06:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.471 10:06:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.471 10:06:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.728 10:06:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.986 [2024-07-25 10:06:16.093415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.986 [2024-07-25 10:06:16.183472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.986 [2024-07-25 10:06:16.183477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.986 [2024-07-25 10:06:16.226175] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.986 [2024-07-25 10:06:16.226228] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
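The round traced above exercises `nbd_dd_data_verify` twice: once with `operation=write` (fill a temp file from `/dev/urandom`, dd it onto each NBD device) and once with `operation=verify` (`cmp` each device back against the temp file, then delete it). That cycle can be sketched as a standalone script; plain temp files are hypothetical stand-ins for `/dev/nbd0` and `/dev/nbd1`, and `oflag=direct` is dropped, so the sketch runs without real NBD devices.

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify cycle: write 1 MiB of
# random data to a temp file, copy it onto every target, then cmp each
# target back against the source and delete it. Targets here are plain
# files standing in for /dev/nbd0 and /dev/nbd1.
set -euo pipefail

nbd_dd_data_verify() {
    local operation=$1 tmp_file=$2; shift 2
    local nbd_list=("$@")
    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
        for dev in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
        done
    else
        for dev in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$dev"   # nonzero exit on mismatch aborts (set -e)
        done
        rm "$tmp_file"
    fi
}

workdir=$(mktemp -d)
nbd_dd_data_verify write  "$workdir/nbdrandtest" "$workdir/nbd0" "$workdir/nbd1"
nbd_dd_data_verify verify "$workdir/nbdrandtest" "$workdir/nbd0" "$workdir/nbd1"
verify_status=ok          # only reached if every cmp matched
rm -rf "$workdir"
```

Because the script runs under `set -e`, a single mismatched byte in any `cmp` aborts before `verify_status` is set, which is the same pass/fail behaviour the test relies on.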
00:05:46.308 10:06:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:46.308 spdk_app_start Round 2 00:05:46.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.308 10:06:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:46.308 10:06:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60290 /var/tmp/spdk-nbd.sock 00:05:46.308 10:06:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60290 ']' 00:05:46.308 10:06:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.308 10:06:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.308 10:06:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.308 10:06:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.308 10:06:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.308 10:06:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.308 10:06:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:46.308 10:06:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.308 Malloc0 00:05:46.308 10:06:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.308 Malloc1 00:05:46.565 10:06:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.565 
10:06:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.565 /dev/nbd0 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.565 10:06:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:46.565 10:06:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:46.565 10:06:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:46.565 10:06:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.565 10:06:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:46.565 10:06:19 
event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:46.565 10:06:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:46.565 10:06:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:46.565 10:06:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.565 1+0 records in 00:05:46.565 1+0 records out 00:05:46.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346594 s, 11.8 MB/s 00:05:46.565 10:06:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.565 10:06:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:46.565 10:06:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.565 10:06:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:46.565 10:06:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.565 10:06:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.823 /dev/nbd1 00:05:46.823 10:06:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.823 10:06:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.823 10:06:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:46.823 10:06:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:46.823 10:06:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:46.823 10:06:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.823 10:06:20 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:46.823 10:06:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:46.823 10:06:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:46.823 10:06:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:46.823 10:06:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.823 1+0 records in 00:05:46.823 1+0 records out 00:05:46.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291476 s, 14.1 MB/s 00:05:46.823 10:06:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.082 10:06:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:47.082 10:06:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.082 10:06:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:47.082 10:06:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.082 { 00:05:47.082 "nbd_device": "/dev/nbd0", 00:05:47.082 "bdev_name": "Malloc0" 00:05:47.082 }, 00:05:47.082 { 00:05:47.082 "nbd_device": "/dev/nbd1", 00:05:47.082 "bdev_name": 
"Malloc1" 00:05:47.082 } 00:05:47.082 ]' 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.082 { 00:05:47.082 "nbd_device": "/dev/nbd0", 00:05:47.082 "bdev_name": "Malloc0" 00:05:47.082 }, 00:05:47.082 { 00:05:47.082 "nbd_device": "/dev/nbd1", 00:05:47.082 "bdev_name": "Malloc1" 00:05:47.082 } 00:05:47.082 ]' 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.082 /dev/nbd1' 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.082 /dev/nbd1' 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.082 256+0 records in 00:05:47.082 256+0 records out 00:05:47.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519973 s, 202 MB/s 
00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.082 10:06:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.341 256+0 records in 00:05:47.341 256+0 records out 00:05:47.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207177 s, 50.6 MB/s 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.341 256+0 records in 00:05:47.341 256+0 records out 00:05:47.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239608 s, 43.8 MB/s 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.341 10:06:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.599 10:06:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.599 10:06:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.599 10:06:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.599 10:06:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.599 10:06:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.599 10:06:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.599 10:06:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.599 10:06:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.599 10:06:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.599 10:06:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.858 10:06:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.858 10:06:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:47.858 10:06:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.858 10:06:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.858 10:06:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.858 10:06:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.858 10:06:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.858 10:06:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.858 10:06:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.858 10:06:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.858 10:06:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.116 10:06:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.116 10:06:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.116 10:06:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.116 10:06:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.116 10:06:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.116 10:06:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.116 10:06:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.116 10:06:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.116 10:06:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.116 10:06:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.116 10:06:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.116 10:06:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.116 10:06:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.374 10:06:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.633 [2024-07-25 10:06:21.649822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.633 [2024-07-25 10:06:21.751065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.633 [2024-07-25 10:06:21.751071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.633 [2024-07-25 10:06:21.792671] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.633 [2024-07-25 10:06:21.792729] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.918 10:06:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60290 /var/tmp/spdk-nbd.sock 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60290 ']' 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
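The teardown above calls `waitfornbd_exit` after each `nbd_stop_disk`: a bounded loop that re-reads `/proc/partitions` up to 20 times and breaks once the device name no longer appears as a whole word. A minimal runnable sketch of that polling loop, with a temp file standing in for `/proc/partitions` so no real NBD device is needed:

```shell
# Sketch of the waitfornbd_exit polling loop: poll a partitions listing
# up to 20 times for a whole-word device name, returning success as soon
# as the entry disappears. A temp file stands in for /proc/partitions.
partitions=$(mktemp)
echo "nbd0" > "$partitions"

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # grep -q -w: quiet, whole-word match, so nbd0 never matches nbd01
        if ! grep -q -w "$nbd_name" "$partitions"; then
            return 0        # device entry gone: success
        fi
        sleep 0.1
    done
    return 1                # still present after 20 attempts
}

: > "$partitions"           # simulate the kernel removing the device entry
waitfornbd_exit nbd0 && result=gone || result=stuck
rm -f "$partitions"
```

The whole-word `grep -w` matters: a plain substring match would keep the loop spinning on `nbd10` while waiting for `nbd1` to disappear.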
00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:51.918 10:06:24 event.app_repeat -- event/event.sh@39 -- # killprocess 60290 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60290 ']' 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60290 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60290 00:05:51.918 killing process with pid 60290 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60290' 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60290 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60290 00:05:51.918 spdk_app_start is called in Round 0. 00:05:51.918 Shutdown signal received, stop current app iteration 00:05:51.918 Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 reinitialization... 00:05:51.918 spdk_app_start is called in Round 1. 00:05:51.918 Shutdown signal received, stop current app iteration 00:05:51.918 Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 reinitialization... 00:05:51.918 spdk_app_start is called in Round 2. 
00:05:51.918 Shutdown signal received, stop current app iteration 00:05:51.918 Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 reinitialization... 00:05:51.918 spdk_app_start is called in Round 3. 00:05:51.918 Shutdown signal received, stop current app iteration 00:05:51.918 10:06:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:51.918 10:06:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:51.918 00:05:51.918 real 0m17.369s 00:05:51.918 user 0m38.316s 00:05:51.918 sys 0m2.827s 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.918 ************************************ 00:05:51.918 END TEST app_repeat 00:05:51.918 ************************************ 00:05:51.918 10:06:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.918 10:06:25 event -- common/autotest_common.sh@1142 -- # return 0 00:05:51.918 10:06:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:51.918 10:06:25 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:51.918 10:06:25 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.918 10:06:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.918 10:06:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.918 ************************************ 00:05:51.918 START TEST cpu_locks 00:05:51.918 ************************************ 00:05:51.918 10:06:25 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:51.918 * Looking for test storage... 
00:05:51.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:51.918 10:06:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:51.918 10:06:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:51.918 10:06:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:51.918 10:06:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:51.918 10:06:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.918 10:06:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.918 10:06:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.918 ************************************ 00:05:51.918 START TEST default_locks 00:05:51.918 ************************************ 00:05:51.918 10:06:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:51.918 10:06:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60704 00:05:51.918 10:06:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.918 10:06:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60704 00:05:51.918 10:06:25 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60704 ']' 00:05:51.918 10:06:25 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.918 10:06:25 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.918 10:06:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
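The `default_locks` test that starts here later checks `lslocks -p <pid> | grep -q spdk_cpu_lock`, i.e. that the target process holds its per-core lock file. The underlying mechanism can be sketched with `flock`: an exclusive lock on a file blocks any second nonblocking taker. The temp path below is a hypothetical stand-in for SPDK's `/var/tmp/spdk_cpu_lock_*` files.

```shell
# Sketch of the per-core lock-file behaviour the cpu_locks tests exercise:
# hold an exclusive flock on a lock file via fd 9, then show that a second
# process fails a nonblocking (-n) attempt on the same file.
lockfile=$(mktemp)

exec 9>"$lockfile"                      # fd 9 stays open and holds the lock
flock -n 9 && first=held || first=busy

# util-linux flock spawns a child that opens the file fresh; its LOCK_EX|LOCK_NB
# attempt is denied by the lock already held on fd 9's open file description.
if flock -n "$lockfile" -c true 2>/dev/null; then
    second=acquired
else
    second=blocked
fi

exec 9>&-                               # releasing the fd drops the lock
rm -f "$lockfile"
```

This is also why `killprocess` releasing the instance matters in these tests: the lock dies with the file descriptor, so a stale entry in `lslocks` indicates the process is still alive.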
00:05:51.918 10:06:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.918 10:06:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.176 [2024-07-25 10:06:25.209031] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:05:52.176 [2024-07-25 10:06:25.209933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60704 ] 00:05:52.176 [2024-07-25 10:06:25.350307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.435 [2024-07-25 10:06:25.448510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.001 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.001 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:53.001 10:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60704 00:05:53.001 10:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60704 00:05:53.001 10:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.568 10:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60704 00:05:53.568 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60704 ']' 00:05:53.568 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60704 00:05:53.568 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:53.568 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.568 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60704 00:05:53.568 
10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.568 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.568 killing process with pid 60704 00:05:53.568 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60704' 00:05:53.568 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60704 00:05:53.568 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60704 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60704 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60704 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60704 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60704 ']' 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.827 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60704) - No such process 00:05:53.827 ERROR: process (pid: 60704) is no longer running 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.827 00:05:53.827 real 0m1.866s 00:05:53.827 user 0m2.015s 00:05:53.827 sys 0m0.600s 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.827 10:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.827 ************************************ 00:05:53.827 END TEST default_locks 00:05:53.827 ************************************ 00:05:53.827 
10:06:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:53.827 10:06:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:53.827 10:06:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.827 10:06:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.827 10:06:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.827 ************************************ 00:05:53.827 START TEST default_locks_via_rpc 00:05:53.827 ************************************ 00:05:53.827 10:06:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:53.827 10:06:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60756 00:05:53.827 10:06:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60756 00:05:53.827 10:06:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60756 ']' 00:05:53.827 10:06:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.827 10:06:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.827 10:06:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.827 10:06:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.827 10:06:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.827 10:06:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.086 [2024-07-25 10:06:27.112300] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:05:54.086 [2024-07-25 10:06:27.112372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60756 ] 00:05:54.086 [2024-07-25 10:06:27.246942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.370 [2024-07-25 10:06:27.344823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:54.948 10:06:28 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60756 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60756 00:05:54.948 10:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.515 10:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60756 00:05:55.515 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60756 ']' 00:05:55.515 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60756 00:05:55.515 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:55.515 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.515 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60756 00:05:55.515 killing process with pid 60756 00:05:55.515 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.515 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.515 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60756' 00:05:55.515 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60756 00:05:55.515 
10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60756 00:05:55.773 00:05:55.773 real 0m1.916s 00:05:55.773 user 0m2.082s 00:05:55.773 sys 0m0.599s 00:05:55.773 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.773 10:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.773 ************************************ 00:05:55.773 END TEST default_locks_via_rpc 00:05:55.773 ************************************ 00:05:55.773 10:06:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:55.773 10:06:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:55.773 10:06:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.773 10:06:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.773 10:06:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.773 ************************************ 00:05:55.773 START TEST non_locking_app_on_locked_coremask 00:05:55.773 ************************************ 00:05:55.773 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:55.773 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60807 00:05:55.773 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60807 /var/tmp/spdk.sock 00:05:55.773 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60807 ']' 00:05:55.773 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.773 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.773 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.032 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.032 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.032 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.032 [2024-07-25 10:06:29.090853] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:05:56.032 [2024-07-25 10:06:29.090936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60807 ] 00:05:56.032 [2024-07-25 10:06:29.223638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.290 [2024-07-25 10:06:29.313338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.857 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.857 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:56.857 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60823 00:05:56.857 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60823 /var/tmp/spdk2.sock 00:05:56.857 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- 
event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:56.857 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60823 ']' 00:05:56.857 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.857 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.857 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.857 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.857 10:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.857 [2024-07-25 10:06:30.041956] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:05:56.857 [2024-07-25 10:06:30.042324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60823 ] 00:05:57.115 [2024-07-25 10:06:30.185757] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.115 [2024-07-25 10:06:30.185793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.115 [2024-07-25 10:06:30.360179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.046 10:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.046 10:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:58.046 10:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60807 00:05:58.046 10:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60807 00:05:58.046 10:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.667 10:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60807 00:05:58.667 10:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60807 ']' 00:05:58.667 10:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60807 00:05:58.667 10:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:58.667 10:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.667 10:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60807 00:05:58.667 10:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.667 10:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.667 killing process with pid 60807 00:05:58.667 10:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 60807' 00:05:58.667 10:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60807 00:05:58.667 10:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60807 00:05:59.233 10:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60823 00:05:59.233 10:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60823 ']' 00:05:59.233 10:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60823 00:05:59.233 10:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:59.233 10:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.233 10:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60823 00:05:59.233 10:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.233 10:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.233 killing process with pid 60823 00:05:59.233 10:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60823' 00:05:59.233 10:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60823 00:05:59.234 10:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60823 00:05:59.491 00:05:59.491 real 0m3.697s 00:05:59.491 user 0m4.116s 00:05:59.491 sys 0m1.033s 00:05:59.491 10:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:05:59.491 10:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.491 ************************************ 00:05:59.491 END TEST non_locking_app_on_locked_coremask 00:05:59.491 ************************************ 00:05:59.749 10:06:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.749 10:06:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:59.749 10:06:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.749 10:06:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.749 10:06:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.749 ************************************ 00:05:59.749 START TEST locking_app_on_unlocked_coremask 00:05:59.749 ************************************ 00:05:59.749 10:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:59.749 10:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60879 00:05:59.749 10:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60879 /var/tmp/spdk.sock 00:05:59.749 10:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60879 ']' 00:05:59.749 10:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:59.749 10:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:59.749 10:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.749 10:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.749 10:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.749 10:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.749 [2024-07-25 10:06:32.874096] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:05:59.749 [2024-07-25 10:06:32.874189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60879 ] 00:06:00.007 [2024-07-25 10:06:33.015217] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.007 [2024-07-25 10:06:33.015253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.007 [2024-07-25 10:06:33.096713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.582 10:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.582 10:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:00.582 10:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60895 00:06:00.582 10:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60895 /var/tmp/spdk2.sock 00:06:00.582 10:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:00.582 10:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60895 ']' 00:06:00.582 10:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.582 10:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.582 10:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.582 10:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.582 10:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.841 [2024-07-25 10:06:33.862645] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:00.841 [2024-07-25 10:06:33.862994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60895 ] 00:06:00.841 [2024-07-25 10:06:34.007285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.102 [2024-07-25 10:06:34.190419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.674 10:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.674 10:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:01.674 10:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60895 00:06:01.674 10:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.674 10:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60895 00:06:02.622 10:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60879 00:06:02.622 10:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60879 ']' 00:06:02.622 10:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60879 00:06:02.622 10:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:02.622 10:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.622 10:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60879 00:06:02.622 killing process with pid 60879 00:06:02.622 10:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.622 10:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.622 10:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60879' 00:06:02.622 10:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60879 00:06:02.622 10:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60879 00:06:03.189 10:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60895 00:06:03.189 10:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60895 ']' 00:06:03.189 10:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60895 00:06:03.189 10:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:03.189 10:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.189 10:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60895 00:06:03.448 killing process with pid 60895 00:06:03.448 10:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.448 10:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.448 10:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60895' 00:06:03.448 10:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60895 00:06:03.448 10:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@972 -- # wait 60895 00:06:03.706 00:06:03.706 real 0m3.995s 00:06:03.706 user 0m4.493s 00:06:03.706 sys 0m1.184s 00:06:03.706 10:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.706 ************************************ 00:06:03.706 10:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.706 END TEST locking_app_on_unlocked_coremask 00:06:03.706 ************************************ 00:06:03.706 10:06:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:03.706 10:06:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:03.706 10:06:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.706 10:06:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.706 10:06:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.706 ************************************ 00:06:03.706 START TEST locking_app_on_locked_coremask 00:06:03.706 ************************************ 00:06:03.706 10:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:03.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:03.706 10:06:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60962 00:06:03.706 10:06:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60962 /var/tmp/spdk.sock 00:06:03.706 10:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60962 ']' 00:06:03.706 10:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.706 10:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.706 10:06:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.706 10:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.706 10:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.706 10:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.706 [2024-07-25 10:06:36.928109] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:03.706 [2024-07-25 10:06:36.928200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60962 ] 00:06:03.964 [2024-07-25 10:06:37.070749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.964 [2024-07-25 10:06:37.163315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.898 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.898 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:04.898 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:04.898 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60978 00:06:04.898 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60978 /var/tmp/spdk2.sock 00:06:04.898 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:04.898 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60978 /var/tmp/spdk2.sock 00:06:04.898 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:04.898 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.899 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:04.899 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:06:04.899 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60978 /var/tmp/spdk2.sock 00:06:04.899 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60978 ']' 00:06:04.899 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.899 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.899 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.899 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.899 10:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.899 [2024-07-25 10:06:37.891399] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:04.899 [2024-07-25 10:06:37.891653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60978 ] 00:06:04.899 [2024-07-25 10:06:38.028006] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60962 has claimed it. 00:06:04.899 [2024-07-25 10:06:38.028077] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:05.467 ERROR: process (pid: 60978) is no longer running 00:06:05.467 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60978) - No such process 00:06:05.467 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.467 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:05.467 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:05.467 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.467 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:05.467 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.467 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60962 00:06:05.467 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60962 00:06:05.467 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.726 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60962 00:06:05.726 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60962 ']' 00:06:05.986 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60962 00:06:05.986 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:05.986 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.986 10:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60962 00:06:05.986 
killing process with pid 60962 00:06:05.986 10:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.986 10:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.986 10:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60962' 00:06:05.986 10:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60962 00:06:05.986 10:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60962 00:06:06.244 00:06:06.245 real 0m2.487s 00:06:06.245 user 0m2.871s 00:06:06.245 sys 0m0.579s 00:06:06.245 10:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.245 10:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.245 ************************************ 00:06:06.245 END TEST locking_app_on_locked_coremask 00:06:06.245 ************************************ 00:06:06.245 10:06:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:06.245 10:06:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:06.245 10:06:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.245 10:06:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.245 10:06:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.245 ************************************ 00:06:06.245 START TEST locking_overlapped_coremask 00:06:06.245 ************************************ 00:06:06.245 10:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:06.245 10:06:39 event.cpu_locks.locking_overlapped_coremask -- 
event/cpu_locks.sh@132 -- # spdk_tgt_pid=61029 00:06:06.245 10:06:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:06.245 10:06:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61029 /var/tmp/spdk.sock 00:06:06.245 10:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61029 ']' 00:06:06.245 10:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.245 10:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.245 10:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.245 10:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.245 10:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.245 [2024-07-25 10:06:39.452049] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:06.245 [2024-07-25 10:06:39.452144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61029 ] 00:06:06.503 [2024-07-25 10:06:39.588687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.503 [2024-07-25 10:06:39.689700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.503 [2024-07-25 10:06:39.689872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.503 [2024-07-25 10:06:39.689873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.440 10:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61047 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61047 /var/tmp/spdk2.sock 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61047 /var/tmp/spdk2.sock 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61047 /var/tmp/spdk2.sock 00:06:07.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61047 ']' 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.441 10:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.441 [2024-07-25 10:06:40.427170] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:07.441 [2024-07-25 10:06:40.427241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61047 ] 00:06:07.441 [2024-07-25 10:06:40.561523] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61029 has claimed it. 00:06:07.441 [2024-07-25 10:06:40.561584] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:08.007 ERROR: process (pid: 61047) is no longer running 00:06:08.007 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61047) - No such process 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61029 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 61029 ']' 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 61029 00:06:08.007 10:06:41 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61029 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61029' 00:06:08.007 killing process with pid 61029 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 61029 00:06:08.007 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 61029 00:06:08.265 00:06:08.265 real 0m2.106s 00:06:08.265 user 0m5.808s 00:06:08.265 sys 0m0.394s 00:06:08.265 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.265 ************************************ 00:06:08.265 END TEST locking_overlapped_coremask 00:06:08.265 ************************************ 00:06:08.265 10:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.524 10:06:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:08.524 10:06:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:08.524 10:06:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.524 10:06:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.524 10:06:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 
00:06:08.524 ************************************ 00:06:08.524 START TEST locking_overlapped_coremask_via_rpc 00:06:08.524 ************************************ 00:06:08.524 10:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:08.524 10:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61087 00:06:08.524 10:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61087 /var/tmp/spdk.sock 00:06:08.524 10:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61087 ']' 00:06:08.524 10:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:08.524 10:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.524 10:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.524 10:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.524 10:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.524 10:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.524 [2024-07-25 10:06:41.603924] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:08.524 [2024-07-25 10:06:41.604001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61087 ] 00:06:08.524 [2024-07-25 10:06:41.738818] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:08.524 [2024-07-25 10:06:41.738856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.783 [2024-07-25 10:06:41.823443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.783 [2024-07-25 10:06:41.823544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.783 [2024-07-25 10:06:41.823548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.350 10:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.350 10:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:09.350 10:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:09.350 10:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61105 00:06:09.350 10:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61105 /var/tmp/spdk2.sock 00:06:09.350 10:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61105 ']' 00:06:09.350 10:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.350 10:06:42 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.350 10:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.350 10:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.350 10:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.350 [2024-07-25 10:06:42.527933] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:09.350 [2024-07-25 10:06:42.528252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61105 ] 00:06:09.609 [2024-07-25 10:06:42.663665] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:09.609 [2024-07-25 10:06:42.663714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.868 [2024-07-25 10:06:42.867692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.868 [2024-07-25 10:06:42.871566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.868 [2024-07-25 10:06:42.871570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.434 10:06:43 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.434 [2024-07-25 10:06:43.495517] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61087 has claimed it. 00:06:10.434 request: 00:06:10.434 { 00:06:10.434 "method": "framework_enable_cpumask_locks", 00:06:10.434 "req_id": 1 00:06:10.434 } 00:06:10.434 Got JSON-RPC error response 00:06:10.434 response: 00:06:10.434 { 00:06:10.434 "code": -32603, 00:06:10.434 "message": "Failed to claim CPU core: 2" 00:06:10.434 } 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61087 /var/tmp/spdk.sock 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # 
'[' -z 61087 ']' 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.434 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.692 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.692 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:10.692 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61105 /var/tmp/spdk2.sock 00:06:10.692 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61105 ']' 00:06:10.692 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.692 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.692 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:10.692 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.692 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.951 ************************************ 00:06:10.951 END TEST locking_overlapped_coremask_via_rpc 00:06:10.951 ************************************ 00:06:10.951 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.951 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:10.951 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:10.951 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.951 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.951 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.951 00:06:10.951 real 0m2.415s 00:06:10.951 user 0m1.155s 00:06:10.951 sys 0m0.195s 00:06:10.951 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.951 10:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.951 10:06:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:10.951 10:06:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:10.951 10:06:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
61087 ]] 00:06:10.951 10:06:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61087 00:06:10.951 10:06:43 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61087 ']' 00:06:10.951 10:06:43 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61087 00:06:10.951 10:06:43 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:10.951 10:06:43 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.951 10:06:43 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61087 00:06:10.951 10:06:44 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.951 10:06:44 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.951 10:06:44 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61087' 00:06:10.951 killing process with pid 61087 00:06:10.951 10:06:44 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61087 00:06:10.951 10:06:44 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61087 00:06:11.209 10:06:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61105 ]] 00:06:11.209 10:06:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61105 00:06:11.209 10:06:44 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61105 ']' 00:06:11.209 10:06:44 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61105 00:06:11.209 10:06:44 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:11.209 10:06:44 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.209 10:06:44 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61105 00:06:11.209 killing process with pid 61105 00:06:11.209 10:06:44 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:11.209 10:06:44 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:11.209 
10:06:44 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61105' 00:06:11.209 10:06:44 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61105 00:06:11.209 10:06:44 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61105 00:06:11.468 10:06:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:11.468 10:06:44 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:11.468 Process with pid 61087 is not found 00:06:11.468 Process with pid 61105 is not found 00:06:11.468 10:06:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61087 ]] 00:06:11.468 10:06:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61087 00:06:11.468 10:06:44 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61087 ']' 00:06:11.468 10:06:44 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61087 00:06:11.468 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61087) - No such process 00:06:11.468 10:06:44 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61087 is not found' 00:06:11.468 10:06:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61105 ]] 00:06:11.468 10:06:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61105 00:06:11.468 10:06:44 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61105 ']' 00:06:11.468 10:06:44 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61105 00:06:11.468 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61105) - No such process 00:06:11.468 10:06:44 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61105 is not found' 00:06:11.468 10:06:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:11.468 00:06:11.468 real 0m19.693s 00:06:11.468 user 0m33.758s 00:06:11.468 sys 0m5.372s 00:06:11.468 10:06:44 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.468 10:06:44 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:11.468 ************************************ 00:06:11.468 END TEST cpu_locks 00:06:11.468 ************************************ 00:06:11.785 10:06:44 event -- common/autotest_common.sh@1142 -- # return 0 00:06:11.785 00:06:11.785 real 0m46.028s 00:06:11.785 user 1m27.159s 00:06:11.785 sys 0m9.029s 00:06:11.785 10:06:44 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.785 10:06:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.785 ************************************ 00:06:11.785 END TEST event 00:06:11.785 ************************************ 00:06:11.785 10:06:44 -- common/autotest_common.sh@1142 -- # return 0 00:06:11.785 10:06:44 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:11.785 10:06:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.785 10:06:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.785 10:06:44 -- common/autotest_common.sh@10 -- # set +x 00:06:11.785 ************************************ 00:06:11.785 START TEST thread 00:06:11.785 ************************************ 00:06:11.785 10:06:44 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:11.785 * Looking for test storage... 
00:06:11.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:11.785 10:06:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:11.785 10:06:44 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:11.785 10:06:44 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.785 10:06:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.785 ************************************ 00:06:11.785 START TEST thread_poller_perf 00:06:11.785 ************************************ 00:06:11.785 10:06:44 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:11.785 [2024-07-25 10:06:44.951129] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:11.785 [2024-07-25 10:06:44.951208] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61222 ] 00:06:12.044 [2024-07-25 10:06:45.087051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.044 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:12.044 [2024-07-25 10:06:45.176024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.421 ====================================== 00:06:13.421 busy:2105823232 (cyc) 00:06:13.421 total_run_count: 393000 00:06:13.421 tsc_hz: 2100000000 (cyc) 00:06:13.421 ====================================== 00:06:13.421 poller_cost: 5358 (cyc), 2551 (nsec) 00:06:13.421 00:06:13.421 real 0m1.323s 00:06:13.421 user 0m1.168s 00:06:13.421 sys 0m0.049s 00:06:13.421 10:06:46 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.421 ************************************ 00:06:13.421 10:06:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.421 END TEST thread_poller_perf 00:06:13.421 ************************************ 00:06:13.421 10:06:46 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:13.422 10:06:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:13.422 10:06:46 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:13.422 10:06:46 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.422 10:06:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.422 ************************************ 00:06:13.422 START TEST thread_poller_perf 00:06:13.422 ************************************ 00:06:13.422 10:06:46 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:13.422 [2024-07-25 10:06:46.332806] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:13.422 [2024-07-25 10:06:46.332902] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61258 ] 00:06:13.422 [2024-07-25 10:06:46.475405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.422 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:13.422 [2024-07-25 10:06:46.557951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.798 ====================================== 00:06:14.798 busy:2101785612 (cyc) 00:06:14.798 total_run_count: 5227000 00:06:14.798 tsc_hz: 2100000000 (cyc) 00:06:14.798 ====================================== 00:06:14.798 poller_cost: 402 (cyc), 191 (nsec) 00:06:14.798 00:06:14.798 real 0m1.319s 00:06:14.798 user 0m1.165s 00:06:14.798 sys 0m0.049s 00:06:14.798 10:06:47 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.799 ************************************ 00:06:14.799 END TEST thread_poller_perf 00:06:14.799 10:06:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.799 ************************************ 00:06:14.799 10:06:47 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:14.799 10:06:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:14.799 00:06:14.799 real 0m2.855s 00:06:14.799 user 0m2.410s 00:06:14.799 sys 0m0.234s 00:06:14.799 10:06:47 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.799 10:06:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.799 ************************************ 00:06:14.799 END TEST thread 00:06:14.799 ************************************ 00:06:14.799 10:06:47 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.799 10:06:47 -- spdk/autotest.sh@183 -- # run_test accel 
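The two poller_perf result blocks above report poller_cost as busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A minimal sketch of that arithmetic, with the values taken from the log (the helper name is illustrative, not part of SPDK):

```python
def poller_cost(busy_cyc, total_run_count, tsc_hz):
    # cost of one poller invocation, in TSC cycles and nanoseconds
    cyc = busy_cyc // total_run_count
    nsec = int(cyc * 1_000_000_000 / tsc_hz)
    return cyc, nsec

# run 1: -l 1 (1 us poller period)
print(poller_cost(2105823232, 393000, 2100000000))   # (5358, 2551)
# run 2: -l 0 (busy polling) -- far cheaper per invocation
print(poller_cost(2101785612, 5227000, 2100000000))  # (402, 191)
```

The comparison shows why the 0-period run completes ~13x more iterations: with no timer period, each poll costs only a few hundred cycles.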
/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:14.799 10:06:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.799 10:06:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.799 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:14.799 ************************************ 00:06:14.799 START TEST accel 00:06:14.799 ************************************ 00:06:14.799 10:06:47 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:14.799 * Looking for test storage... 00:06:14.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:14.799 10:06:47 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:14.799 10:06:47 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:14.799 10:06:47 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:14.799 10:06:47 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61332 00:06:14.799 10:06:47 accel -- accel/accel.sh@63 -- # waitforlisten 61332 00:06:14.799 10:06:47 accel -- common/autotest_common.sh@829 -- # '[' -z 61332 ']' 00:06:14.799 10:06:47 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.799 10:06:47 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.799 10:06:47 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:14.799 10:06:47 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:14.799 10:06:47 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.799 10:06:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.799 10:06:47 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:14.799 10:06:47 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.799 10:06:47 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.799 10:06:47 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.799 10:06:47 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.799 10:06:47 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.799 10:06:47 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:14.799 10:06:47 accel -- accel/accel.sh@41 -- # jq -r . 00:06:14.799 [2024-07-25 10:06:47.904723] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:14.799 [2024-07-25 10:06:47.904817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61332 ] 00:06:14.799 [2024-07-25 10:06:48.047494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.057 [2024-07-25 10:06:48.135609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.624 10:06:48 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.624 10:06:48 accel -- common/autotest_common.sh@862 -- # return 0 00:06:15.624 10:06:48 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:15.624 10:06:48 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:15.624 10:06:48 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:15.624 10:06:48 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:15.624 10:06:48 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:15.624 10:06:48 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:15.624 10:06:48 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:15.624 10:06:48 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.624 10:06:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.883 10:06:48 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # 
IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:15.883 10:06:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:15.883 10:06:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:15.883 10:06:48 accel -- accel/accel.sh@75 -- # killprocess 61332 00:06:15.883 10:06:48 accel -- common/autotest_common.sh@948 -- # '[' -z 61332 ']' 00:06:15.883 10:06:48 accel -- common/autotest_common.sh@952 -- # kill -0 61332 00:06:15.883 10:06:48 accel -- common/autotest_common.sh@953 -- # uname 00:06:15.884 10:06:48 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.884 10:06:48 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61332 00:06:15.884 10:06:48 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.884 10:06:48 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.884 killing process with pid 61332 00:06:15.884 10:06:48 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61332' 00:06:15.884 10:06:48 accel -- common/autotest_common.sh@967 -- # kill 61332 00:06:15.884 10:06:48 
accel -- common/autotest_common.sh@972 -- # wait 61332 00:06:16.143 10:06:49 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:16.143 10:06:49 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:16.143 10:06:49 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:16.143 10:06:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.143 10:06:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.143 10:06:49 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:16.143 10:06:49 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:16.143 10:06:49 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:16.143 10:06:49 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.143 10:06:49 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.143 10:06:49 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.143 10:06:49 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.143 10:06:49 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.143 10:06:49 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:16.143 10:06:49 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
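The expected-opcode table built above (accel.sh@70-73) pipes `accel_get_opc_assignments` through `jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'` and then splits each line on `=`. A rough Python equivalent of that transform; the sample RPC payload here is hypothetical, only its shape (opcode-to-module map) is taken from the log:

```python
import json

# hypothetical accel_get_opc_assignments response: opcode -> module
payload = json.loads('{"copy": "software", "fill": "software", "crc32c": "software"}')

# jq: . | to_entries | map("\(.key)=\(.value)") | .[]
lines = [f"{k}={v}" for k, v in payload.items()]

# accel.sh@72 reads each line with IFS== into (opc, module)
expected_opcs = dict(line.split("=", 1) for line in lines)
print(expected_opcs)  # {'copy': 'software', 'fill': 'software', 'crc32c': 'software'}
```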
00:06:16.143 10:06:49 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.143 10:06:49 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:16.143 10:06:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.143 10:06:49 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:16.143 10:06:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:16.143 10:06:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.143 10:06:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.143 ************************************ 00:06:16.143 START TEST accel_missing_filename 00:06:16.143 ************************************ 00:06:16.143 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:16.143 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:16.143 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:16.143 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:16.143 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.143 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:16.143 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.143 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:16.143 10:06:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:16.143 10:06:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:16.143 10:06:49 accel.accel_missing_filename -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.143 10:06:49 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.143 10:06:49 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.143 10:06:49 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.143 10:06:49 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.143 10:06:49 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:16.143 10:06:49 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:16.143 [2024-07-25 10:06:49.366374] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:16.143 [2024-07-25 10:06:49.366573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61385 ] 00:06:16.401 [2024-07-25 10:06:49.507036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.401 [2024-07-25 10:06:49.587092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.401 [2024-07-25 10:06:49.629029] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.659 [2024-07-25 10:06:49.688570] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:16.659 A filename is required. 
00:06:16.659 ************************************ 00:06:16.659 END TEST accel_missing_filename 00:06:16.659 ************************************ 00:06:16.659 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:16.659 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:16.659 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:16.659 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:16.659 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:16.659 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:16.659 00:06:16.659 real 0m0.421s 00:06:16.659 user 0m0.263s 00:06:16.659 sys 0m0.096s 00:06:16.659 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.659 10:06:49 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:16.659 10:06:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.659 10:06:49 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:16.659 10:06:49 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:16.659 10:06:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.659 10:06:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.659 ************************************ 00:06:16.659 START TEST accel_compress_verify 00:06:16.659 ************************************ 00:06:16.659 10:06:49 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:16.659 10:06:49 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:16.659 10:06:49 accel.accel_compress_verify -- 
common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:16.659 10:06:49 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:16.659 10:06:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.659 10:06:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:16.659 10:06:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.659 10:06:49 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:16.659 10:06:49 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:16.659 10:06:49 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:16.659 10:06:49 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.659 10:06:49 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.659 10:06:49 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.659 10:06:49 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.659 10:06:49 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.659 10:06:49 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:16.659 10:06:49 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:16.659 [2024-07-25 10:06:49.842630] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:16.659 [2024-07-25 10:06:49.842721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61409 ] 00:06:16.916 [2024-07-25 10:06:49.982176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.916 [2024-07-25 10:06:50.072907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.916 [2024-07-25 10:06:50.115075] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.173 [2024-07-25 10:06:50.174650] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:17.173 00:06:17.173 Compression does not support the verify option, aborting. 00:06:17.173 10:06:50 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:17.173 10:06:50 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.173 10:06:50 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:17.173 ************************************ 00:06:17.173 END TEST accel_compress_verify 00:06:17.173 ************************************ 00:06:17.173 10:06:50 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:17.173 10:06:50 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:17.173 10:06:50 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.173 00:06:17.173 real 0m0.438s 00:06:17.173 user 0m0.277s 00:06:17.173 sys 0m0.098s 00:06:17.173 10:06:50 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.173 10:06:50 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:17.173 10:06:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.173 10:06:50 accel -- accel/accel.sh@95 -- # run_test 
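In both negative tests above, the NOT wrapper in autotest_common.sh normalizes the child's exit status before deciding the test passed: the log shows es=234 becoming 106 and es=161 becoming 33 after the `(( es > 128 ))` check, i.e. the value is reduced by 128, and the following case statement then collapses any recognized code to 1. A sketch of that normalization step, inferred from the log rather than quoted from the shell source:

```python
def normalize_es(es):
    # autotest_common.sh: (( es > 128 )) strips the high bit
    # that bash sets for signal-terminated children
    if es > 128:
        es -= 128
    return es

print(normalize_es(234))  # 106 (accel_missing_filename)
print(normalize_es(161))  # 33  (accel_compress_verify)
```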
accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:17.173 10:06:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:17.173 10:06:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.173 10:06:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.173 ************************************ 00:06:17.173 START TEST accel_wrong_workload 00:06:17.173 ************************************ 00:06:17.173 10:06:50 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:17.173 10:06:50 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:17.173 10:06:50 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:17.173 10:06:50 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:17.173 10:06:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.173 10:06:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:17.173 10:06:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.173 10:06:50 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:17.174 10:06:50 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:17.174 10:06:50 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:17.174 10:06:50 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.174 10:06:50 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.174 10:06:50 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.174 10:06:50 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.174 10:06:50 accel.accel_wrong_workload -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.174 10:06:50 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:17.174 10:06:50 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:17.174 Unsupported workload type: foobar 00:06:17.174 [2024-07-25 10:06:50.329327] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:17.174 accel_perf options: 00:06:17.174 [-h help message] 00:06:17.174 [-q queue depth per core] 00:06:17.174 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:17.174 [-T number of threads per core 00:06:17.174 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:17.174 [-t time in seconds] 00:06:17.174 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:17.174 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:17.174 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:17.174 [-l for compress/decompress workloads, name of uncompressed input file 00:06:17.174 [-S for crc32c workload, use this seed value (default 0) 00:06:17.174 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:17.174 [-f for fill workload, use this BYTE value (default 255) 00:06:17.174 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:17.174 [-y verify result if this switch is on] 00:06:17.174 [-a tasks to allocate per core (default: same value as -q)] 00:06:17.174 Can be used to spread operations across a wider range of memory. 
00:06:17.174 10:06:50 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:17.174 10:06:50 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.174 10:06:50 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:17.174 10:06:50 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.174 00:06:17.174 real 0m0.035s 00:06:17.174 user 0m0.020s 00:06:17.174 sys 0m0.015s 00:06:17.174 10:06:50 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.174 10:06:50 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:17.174 ************************************ 00:06:17.174 END TEST accel_wrong_workload 00:06:17.174 ************************************ 00:06:17.174 10:06:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.174 10:06:50 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:17.174 10:06:50 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:17.174 10:06:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.174 10:06:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.174 ************************************ 00:06:17.174 START TEST accel_negative_buffers 00:06:17.174 ************************************ 00:06:17.174 10:06:50 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:17.174 10:06:50 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:17.174 10:06:50 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:17.174 10:06:50 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:17.174 10:06:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:06:17.174 10:06:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:17.174 10:06:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.174 10:06:50 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:17.174 10:06:50 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:17.174 10:06:50 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:17.174 10:06:50 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.174 10:06:50 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.174 10:06:50 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.174 10:06:50 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.174 10:06:50 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.174 10:06:50 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:17.174 10:06:50 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:17.174 -x option must be non-negative. 00:06:17.174 [2024-07-25 10:06:50.416210] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:17.174 accel_perf options: 00:06:17.174 [-h help message] 00:06:17.174 [-q queue depth per core] 00:06:17.174 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:17.174 [-T number of threads per core 00:06:17.174 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:17.174 [-t time in seconds] 00:06:17.174 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:17.174 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:17.174 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:17.174 [-l for compress/decompress workloads, name of uncompressed input file 00:06:17.174 [-S for crc32c workload, use this seed value (default 0) 00:06:17.174 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:17.174 [-f for fill workload, use this BYTE value (default 255) 00:06:17.174 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:17.174 [-y verify result if this switch is on] 00:06:17.174 [-a tasks to allocate per core (default: same value as -q)] 00:06:17.174 Can be used to spread operations across a wider range of memory. 
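The two option listings above are emitted when accel_perf rejects `-w foobar` and `-x -1`. A small sketch of the validation those negative tests exercise, with the workload list copied from the help text; the Python validator itself is illustrative (accel_perf is a C binary):

```python
WORKLOADS = {"copy", "fill", "crc32c", "copy_crc32c", "compare", "compress",
             "decompress", "dualcast", "xor", "dif_verify", "dif_verify_copy",
             "dif_generate", "dif_generate_copy"}

def validate(workload, xor_src_count=2):
    # mirrors the two failure modes seen in the log
    if workload not in WORKLOADS:
        raise ValueError(f"Unsupported workload type: {workload}")
    if xor_src_count < 0:
        raise ValueError("-x option must be non-negative.")
    return True

assert validate("crc32c")
for args in (("foobar", 2), ("xor", -1)):
    try:
        validate(*args)
        raise AssertionError("expected rejection")
    except ValueError:
        pass
```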
00:06:17.174 10:06:50 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:17.174 10:06:50 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.174 10:06:50 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:17.174 10:06:50 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.174 00:06:17.174 real 0m0.031s 00:06:17.174 user 0m0.016s 00:06:17.174 sys 0m0.015s 00:06:17.174 10:06:50 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.174 ************************************ 00:06:17.174 END TEST accel_negative_buffers 00:06:17.174 ************************************ 00:06:17.174 10:06:50 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:17.431 10:06:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.431 10:06:50 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:17.431 10:06:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:17.431 10:06:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.431 10:06:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.431 ************************************ 00:06:17.431 START TEST accel_crc32c 00:06:17.431 ************************************ 00:06:17.431 10:06:50 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:17.431 10:06:50 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:17.431 10:06:50 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:17.431 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 10:06:50 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:17.431 10:06:50 accel.accel_crc32c -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:17.431 10:06:50 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:17.431 10:06:50 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.431 10:06:50 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.431 10:06:50 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.431 10:06:50 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.431 10:06:50 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.431 10:06:50 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:17.431 10:06:50 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:17.431 [2024-07-25 10:06:50.505736] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:17.431 [2024-07-25 10:06:50.505831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61468 ] 00:06:17.431 [2024-07-25 10:06:50.647558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.689 [2024-07-25 10:06:50.737157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r 
var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.689 
10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.689 10:06:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:19.063 10:06:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.063 00:06:19.063 real 0m1.440s 00:06:19.063 user 0m0.018s 00:06:19.063 sys 0m0.002s 00:06:19.063 10:06:51 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.063 10:06:51 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:19.063 ************************************ 00:06:19.063 END TEST accel_crc32c 00:06:19.063 ************************************ 00:06:19.063 10:06:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.063 10:06:51 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:19.063 10:06:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:19.063 10:06:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.064 10:06:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.064 ************************************ 00:06:19.064 START TEST accel_crc32c_C2 00:06:19.064 
************************************ 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.064 10:06:51 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:19.064 [2024-07-25 10:06:51.993612] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:19.064 [2024-07-25 10:06:51.994256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61502 ] 00:06:19.064 [2024-07-25 10:06:52.136131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.064 [2024-07-25 10:06:52.223333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 
-- accel/accel.sh@20 -- # val=32 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.064 
10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 10:06:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.437 00:06:20.437 real 0m1.435s 00:06:20.437 user 0m1.242s 00:06:20.437 sys 0m0.103s 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.437 10:06:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:20.437 ************************************ 00:06:20.437 END TEST accel_crc32c_C2 00:06:20.437 ************************************ 00:06:20.437 10:06:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.437 10:06:53 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:20.437 10:06:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:20.437 10:06:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.437 10:06:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.437 ************************************ 00:06:20.437 START TEST accel_copy 00:06:20.437 ************************************ 00:06:20.437 10:06:53 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:20.437 10:06:53 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:20.437 10:06:53 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:20.437 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.437 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.437 10:06:53 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:20.437 10:06:53 accel.accel_copy -- accel/accel.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:20.437 10:06:53 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:20.437 10:06:53 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.437 10:06:53 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.437 10:06:53 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.437 10:06:53 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.437 10:06:53 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.437 10:06:53 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:20.437 10:06:53 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:20.437 [2024-07-25 10:06:53.485802] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:20.437 [2024-07-25 10:06:53.485865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61537 ] 00:06:20.437 [2024-07-25 10:06:53.618446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.696 [2024-07-25 10:06:53.713246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 
accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:20.696 10:06:53 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.696 10:06:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.634 10:06:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.892 10:06:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.892 10:06:54 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:06:21.892 10:06:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.892 10:06:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.892 10:06:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.892 10:06:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.892 10:06:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:21.892 10:06:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.892 00:06:21.892 real 0m1.432s 00:06:21.892 user 0m1.250s 00:06:21.892 sys 0m0.092s 00:06:21.892 10:06:54 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.892 10:06:54 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:21.892 ************************************ 00:06:21.892 END TEST accel_copy 00:06:21.892 ************************************ 00:06:21.892 10:06:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.892 10:06:54 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:21.892 10:06:54 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:21.892 10:06:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.892 10:06:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.892 ************************************ 00:06:21.892 START TEST accel_fill 00:06:21.892 ************************************ 00:06:21.892 10:06:54 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:21.892 10:06:54 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:21.892 10:06:54 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:21.892 10:06:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.892 10:06:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.892 10:06:54 accel.accel_fill -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:21.892 10:06:54 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:21.892 10:06:54 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:21.892 10:06:54 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.892 10:06:54 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.893 10:06:54 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.893 10:06:54 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.893 10:06:54 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.893 10:06:54 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:21.893 10:06:54 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:21.893 [2024-07-25 10:06:54.983840] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:21.893 [2024-07-25 10:06:54.983926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61566 ] 00:06:21.893 [2024-07-25 10:06:55.127276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.152 [2024-07-25 10:06:55.206331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.152 10:06:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:23.529 10:06:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.529 00:06:23.529 real 0m1.435s 00:06:23.529 user 0m1.236s 00:06:23.529 sys 0m0.109s 00:06:23.529 10:06:56 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.529 10:06:56 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:23.529 ************************************ 00:06:23.529 END TEST accel_fill 00:06:23.529 ************************************ 00:06:23.529 10:06:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.529 10:06:56 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:23.529 10:06:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:23.529 10:06:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.529 10:06:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.529 ************************************ 00:06:23.529 START TEST accel_copy_crc32c 00:06:23.529 ************************************ 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:23.529 [2024-07-25 10:06:56.467802] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:23.529 [2024-07-25 10:06:56.467863] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61606 ] 00:06:23.529 [2024-07-25 10:06:56.596028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.529 [2024-07-25 10:06:56.689138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.529 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.530 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.530 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:23.530 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.530 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:06:23.530 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.530 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.530 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.530 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.530 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.530 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.530 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.530 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.530 10:06:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.906 
10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.906 00:06:24.906 real 0m1.428s 00:06:24.906 user 0m1.244s 00:06:24.906 sys 0m0.094s 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.906 ************************************ 00:06:24.906 END TEST accel_copy_crc32c 00:06:24.906 ************************************ 00:06:24.906 10:06:57 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:24.906 10:06:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.907 10:06:57 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:24.907 10:06:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:24.907 10:06:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.907 10:06:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.907 ************************************ 00:06:24.907 START TEST accel_copy_crc32c_C2 00:06:24.907 
************************************ 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:24.907 10:06:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:24.907 [2024-07-25 10:06:57.950855] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:24.907 [2024-07-25 10:06:57.950918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61637 ] 00:06:24.907 [2024-07-25 10:06:58.082058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.166 [2024-07-25 10:06:58.173122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.166 10:06:58 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.166 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.167 10:06:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.102 
10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.102 00:06:26.102 real 0m1.426s 00:06:26.102 user 0m1.248s 00:06:26.102 sys 0m0.092s 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.102 10:06:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:26.102 ************************************ 00:06:26.102 END TEST accel_copy_crc32c_C2 00:06:26.102 ************************************ 00:06:26.361 10:06:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.362 10:06:59 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:26.362 10:06:59 accel -- common/autotest_common.sh@1099 -- # 
'[' 7 -le 1 ']' 00:06:26.362 10:06:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.362 10:06:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.362 ************************************ 00:06:26.362 START TEST accel_dualcast 00:06:26.362 ************************************ 00:06:26.362 10:06:59 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:26.362 10:06:59 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:26.362 10:06:59 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:26.362 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.362 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.362 10:06:59 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:26.362 10:06:59 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:26.362 10:06:59 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:26.362 10:06:59 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.362 10:06:59 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.362 10:06:59 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.362 10:06:59 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.362 10:06:59 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.362 10:06:59 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:26.362 10:06:59 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:26.362 [2024-07-25 10:06:59.432540] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:26.362 [2024-07-25 10:06:59.432628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61671 ] 00:06:26.362 [2024-07-25 10:06:59.575014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.620 [2024-07-25 10:06:59.672692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.620 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.621 10:06:59 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 
00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.621 10:06:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 ************************************ 00:06:27.996 END TEST accel_dualcast 00:06:27.996 ************************************ 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.996 
10:07:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:27.996 10:07:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.996 00:06:27.996 real 0m1.447s 00:06:27.996 user 0m1.251s 00:06:27.996 sys 0m0.105s 00:06:27.996 10:07:00 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.996 10:07:00 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:27.996 10:07:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.996 10:07:00 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:27.996 10:07:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:27.996 10:07:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.996 10:07:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.996 ************************************ 00:06:27.996 START TEST accel_compare 00:06:27.996 ************************************ 00:06:27.996 10:07:00 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:27.996 10:07:00 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:27.996 10:07:00 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:27.996 10:07:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.996 10:07:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 10:07:00 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:27.996 10:07:00 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:27.996 10:07:00 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:27.996 10:07:00 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.996 10:07:00 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.996 10:07:00 
accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.996 10:07:00 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.996 10:07:00 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.996 10:07:00 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:27.996 10:07:00 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:27.996 [2024-07-25 10:07:00.935519] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:27.996 [2024-07-25 10:07:00.935702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61706 ] 00:06:27.996 [2024-07-25 10:07:01.067685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.996 [2024-07-25 10:07:01.150657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 10:07:01 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.996 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.997 
10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.997 10:07:01 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.997 10:07:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:29.371 10:07:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.372 ************************************ 00:06:29.372 END TEST accel_compare 00:06:29.372 ************************************ 00:06:29.372 00:06:29.372 real 0m1.426s 00:06:29.372 user 0m1.236s 00:06:29.372 sys 0m0.098s 00:06:29.372 10:07:02 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.372 10:07:02 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:29.372 10:07:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.372 10:07:02 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:29.372 10:07:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:29.372 10:07:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.372 10:07:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.372 ************************************ 00:06:29.372 START TEST accel_xor 00:06:29.372 ************************************ 00:06:29.372 10:07:02 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:29.372 10:07:02 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:29.372 10:07:02 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:29.372 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.372 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.372 10:07:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:29.372 10:07:02 accel.accel_xor -- accel/accel.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:29.372 10:07:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:29.372 10:07:02 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.372 10:07:02 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.372 10:07:02 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.372 10:07:02 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.372 10:07:02 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.372 10:07:02 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:29.372 10:07:02 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:29.372 [2024-07-25 10:07:02.416963] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:29.372 [2024-07-25 10:07:02.417053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61739 ] 00:06:29.372 [2024-07-25 10:07:02.560041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.631 [2024-07-25 10:07:02.649834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- 
accel/accel.sh@20 -- # val=0x1 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.631 10:07:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.006 10:07:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.006 10:07:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.006 10:07:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.006 10:07:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.006 10:07:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.006 10:07:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.006 10:07:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.006 10:07:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.006 10:07:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.006 10:07:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:31.007
00:06:31.007 real 0m1.442s
00:06:31.007 user 0m1.259s
00:06:31.007 sys 0m0.095s
00:06:31.007 10:07:03 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:31.007 ************************************
00:06:31.007 END TEST accel_xor
00:06:31.007 ************************************
00:06:31.007 10:07:03 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:06:31.007 10:07:03 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:31.007 10:07:03 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:31.007 10:07:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:06:31.007 10:07:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:31.007 10:07:03 accel -- common/autotest_common.sh@10 -- # set +x
00:06:31.007 ************************************
00:06:31.007 START TEST accel_xor
00:06:31.007 ************************************
00:06:31.007 10:07:03 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:06:31.007 10:07:03 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:06:31.007 [2024-07-25 10:07:03.914204] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization...
00:06:31.007 [2024-07-25 10:07:03.914304] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61775 ]
00:06:31.007 [2024-07-25 10:07:04.056083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:31.007 [2024-07-25 10:07:04.154131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:31.007 10:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:32.387 10:07:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:32.387
00:06:32.387 real 0m1.449s
00:06:32.387 user 0m1.258s
00:06:32.387 sys 0m0.102s
00:06:32.387 10:07:05 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:32.387 10:07:05 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:06:32.387 ************************************
00:06:32.387 END TEST accel_xor
00:06:32.387 ************************************
00:06:32.387 10:07:05 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:32.387 10:07:05 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:06:32.387 10:07:05 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:32.387 10:07:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:32.387 10:07:05 accel -- common/autotest_common.sh@10 -- # set +x
00:06:32.387 ************************************
00:06:32.387 START TEST accel_dif_verify
00:06:32.387 ************************************
00:06:32.387 10:07:05 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:06:32.387 10:07:05 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:06:32.387 10:07:05 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:06:32.387 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.387 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.387 10:07:05 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:06:32.387 10:07:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:32.387 10:07:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:06:32.387 10:07:05 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:32.387 10:07:05 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:32.387 10:07:05 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:32.387 10:07:05 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:32.387 10:07:05 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:32.387 10:07:05 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=,
00:06:32.387 10:07:05 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:06:32.387 [2024-07-25 10:07:05.429864] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization...
00:06:32.387 [2024-07-25 10:07:05.430000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61804 ]
00:06:32.387 [2024-07-25 10:07:05.581653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:32.646 [2024-07-25 10:07:05.669162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:32.646 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:32.646 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.646 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.646 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.646 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:32.646 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.646 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.646 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.646 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:06:32.646 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.646 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.646 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.646 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:32.646 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:32.647 10:07:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:34.023 10:07:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:34.023 10:07:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:34.023 10:07:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:34.023 10:07:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:34.023 10:07:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:34.023 10:07:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:34.023 10:07:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:34.023 10:07:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:34.023 10:07:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:34.023 10:07:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:34.023 10:07:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:34.023 10:07:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:34.023 10:07:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:34.024 10:07:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:34.024 10:07:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:34.024 10:07:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:06:34.024 10:07:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:34.024 ************************************
00:06:34.024 END TEST accel_dif_verify
00:06:34.024 ************************************
00:06:34.024
00:06:34.024 real 0m1.456s
00:06:34.024 user 0m1.247s
00:06:34.024 sys 0m0.116s
00:06:34.024 10:07:06 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:34.024 10:07:06 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:06:34.024 10:07:06 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:34.024 10:07:06 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:06:34.024 10:07:06 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:34.024 10:07:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:34.024 10:07:06 accel -- common/autotest_common.sh@10 -- # set +x
00:06:34.024 ************************************
00:06:34.024 START TEST accel_dif_generate
00:06:34.024 ************************************
00:06:34.024 10:07:06 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate
00:06:34.024 10:07:06 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc
00:06:34.024 10:07:06 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module
00:06:34.024 10:07:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:06 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:06:34.024 10:07:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:34.024 10:07:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:06:34.024 10:07:06 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:34.024 10:07:06 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:34.024 10:07:06 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:34.024 10:07:06 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:34.024 10:07:06 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:34.024 10:07:06 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=,
00:06:34.024 10:07:06 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:06:34.024 [2024-07-25 10:07:06.939398] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization...
00:06:34.024 [2024-07-25 10:07:06.939528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61844 ]
00:06:34.024 [2024-07-25 10:07:07.090821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:34.024 [2024-07-25 10:07:07.167613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:34.024 10:07:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:06:35.396 ************************************
00:06:35.396 END TEST accel_dif_generate
00:06:35.396 ************************************
00:06:35.396 10:07:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:35.396
00:06:35.396 real 0m1.446s
00:06:35.396 user 0m1.250s
00:06:35.396 sys 0m0.105s
00:06:35.396 10:07:08 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:35.396 10:07:08 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:06:35.397 10:07:08 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:35.397 10:07:08 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:06:35.397 10:07:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:35.397 10:07:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:35.397 10:07:08 accel -- common/autotest_common.sh@10 -- # set +x
00:06:35.397 ************************************
00:06:35.397 START TEST accel_dif_generate_copy
00:06:35.397 ************************************
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=,
00:06:35.397 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:06:35.397 [2024-07-25 10:07:08.426140] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization...
00:06:35.397 [2024-07-25 10:07:08.426225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61873 ]
00:06:35.397 [2024-07-25 10:07:08.566798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.656 [2024-07-25 10:07:08.664635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.656 10:07:08 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.656 10:07:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 
-- # case "$var" in 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.617 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.618 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.618 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.618 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:36.618 10:07:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.618 00:06:36.618 real 0m1.442s 00:06:36.618 user 0m1.251s 00:06:36.618 sys 0m0.100s 00:06:36.618 10:07:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.618 ************************************ 00:06:36.618 END TEST accel_dif_generate_copy 00:06:36.618 ************************************ 00:06:36.618 10:07:09 accel.accel_dif_generate_copy -- 
common/autotest_common.sh@10 -- # set +x 00:06:36.875 10:07:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.875 10:07:09 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:36.875 10:07:09 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:36.875 10:07:09 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:36.875 10:07:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.875 10:07:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.875 ************************************ 00:06:36.875 START TEST accel_comp 00:06:36.875 ************************************ 00:06:36.875 10:07:09 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:36.875 10:07:09 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:36.875 10:07:09 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:36.875 10:07:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.875 10:07:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.875 10:07:09 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:36.875 10:07:09 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:36.875 10:07:09 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:36.875 10:07:09 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.875 10:07:09 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.875 10:07:09 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.875 10:07:09 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.875 10:07:09 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:06:36.875 10:07:09 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:36.875 10:07:09 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:36.875 [2024-07-25 10:07:09.933084] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:36.875 [2024-07-25 10:07:09.933183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61914 ] 00:06:36.875 [2024-07-25 10:07:10.074755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.133 [2024-07-25 10:07:10.164011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.133 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # 
read -r var val 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.134 10:07:10 accel.accel_comp 
-- accel/accel.sh@21 -- # case "$var" in 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.134 10:07:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:38.509 10:07:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.509 00:06:38.509 real 0m1.448s 00:06:38.509 user 0m1.249s 00:06:38.509 sys 0m0.108s 00:06:38.509 ************************************ 00:06:38.509 END TEST accel_comp 00:06:38.509 ************************************ 00:06:38.509 10:07:11 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.509 10:07:11 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:38.509 10:07:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.509 10:07:11 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:38.509 10:07:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:38.509 10:07:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.509 10:07:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.509 ************************************ 00:06:38.509 START TEST accel_decomp 00:06:38.509 ************************************ 00:06:38.509 10:07:11 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 
10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:38.509 [2024-07-25 10:07:11.436088] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:38.509 [2024-07-25 10:07:11.436175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61943 ] 00:06:38.509 [2024-07-25 10:07:11.579124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.509 [2024-07-25 10:07:11.661845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:38.509 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.510 10:07:11 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.510 10:07:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.885 10:07:12 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.885 10:07:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.886 10:07:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.886 10:07:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.886 00:06:39.886 real 0m1.439s 00:06:39.886 user 0m1.255s 00:06:39.886 sys 0m0.096s 00:06:39.886 10:07:12 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.886 ************************************ 00:06:39.886 END TEST accel_decomp 00:06:39.886 ************************************ 00:06:39.886 10:07:12 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:39.886 10:07:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.886 10:07:12 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:39.886 10:07:12 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:39.886 10:07:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.886 10:07:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.886 ************************************ 00:06:39.886 START TEST accel_decomp_full 00:06:39.886 ************************************ 00:06:39.886 10:07:12 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:39.886 10:07:12 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:39.886 10:07:12 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:39.886 10:07:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:06:39.886 10:07:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.886 10:07:12 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:39.886 10:07:12 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:39.886 10:07:12 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:39.886 10:07:12 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.886 10:07:12 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.886 10:07:12 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.886 10:07:12 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.886 10:07:12 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.886 10:07:12 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:39.886 10:07:12 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:39.886 [2024-07-25 10:07:12.933146] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:39.886 [2024-07-25 10:07:12.933234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61985 ] 00:06:39.886 [2024-07-25 10:07:13.075048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.144 [2024-07-25 10:07:13.171487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.144 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.144 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.144 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.144 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.144 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.144 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.144 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.144 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.144 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.144 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.144 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.144 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.144 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:40.144 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.145 10:07:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@19 
-- # IFS=: 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.521 10:07:14 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.521 00:06:41.521 real 0m1.459s 00:06:41.521 user 0m1.274s 00:06:41.521 sys 0m0.094s 00:06:41.521 10:07:14 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.521 ************************************ 00:06:41.521 END TEST accel_decomp_full 00:06:41.521 ************************************ 00:06:41.521 10:07:14 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:41.521 10:07:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.521 10:07:14 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:41.521 10:07:14 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:41.521 10:07:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.521 10:07:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.521 
************************************ 00:06:41.521 START TEST accel_decomp_mcore 00:06:41.521 ************************************ 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:41.521 [2024-07-25 10:07:14.451976] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:41.521 [2024-07-25 10:07:14.452062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62014 ] 00:06:41.521 [2024-07-25 10:07:14.596118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.521 [2024-07-25 10:07:14.681467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.521 [2024-07-25 10:07:14.681588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.521 [2024-07-25 10:07:14.681756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.521 [2024-07-25 10:07:14.681757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.521 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.522 10:07:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.911 
10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.911 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.912 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.912 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.912 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.912 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:42.912 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.912 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.912 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.912 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:42.912 10:07:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.912 00:06:42.912 real 0m1.453s 00:06:42.912 user 0m4.539s 00:06:42.912 sys 0m0.121s 00:06:42.912 10:07:15 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.912 10:07:15 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:42.912 ************************************ 00:06:42.912 END TEST accel_decomp_mcore 00:06:42.912 ************************************ 00:06:42.912 10:07:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.912 10:07:15 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.912 10:07:15 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:42.912 10:07:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.912 10:07:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.912 ************************************ 00:06:42.912 START TEST accel_decomp_full_mcore 00:06:42.912 ************************************ 00:06:42.912 10:07:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.912 10:07:15 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:42.912 10:07:15 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:42.912 10:07:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.912 
10:07:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.912 10:07:15 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.912 10:07:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.912 10:07:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:42.912 10:07:15 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.912 10:07:15 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.912 10:07:15 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.912 10:07:15 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.912 10:07:15 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.912 10:07:15 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:42.912 10:07:15 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:42.912 [2024-07-25 10:07:15.963445] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:42.912 [2024-07-25 10:07:15.963532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62052 ] 00:06:42.912 [2024-07-25 10:07:16.100917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.171 [2024-07-25 10:07:16.184942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.171 [2024-07-25 10:07:16.185108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.171 [2024-07-25 10:07:16.185214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.171 [2024-07-25 10:07:16.185340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 
00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 
10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.171 10:07:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.547 10:07:17 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # read -r var val 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.547 00:06:44.547 real 0m1.463s 00:06:44.547 user 0m4.597s 00:06:44.547 sys 0m0.122s 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.547 10:07:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:44.547 ************************************ 00:06:44.547 END TEST accel_decomp_full_mcore 00:06:44.547 ************************************ 00:06:44.547 10:07:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.547 10:07:17 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:44.547 10:07:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:44.547 10:07:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.547 10:07:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.547 
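The trace above repeats the same three steps for every configuration variable: set `IFS=:`, `read -r var val`, then dispatch on `case "$var" in`. The sketch below is a minimal, simplified reconstruction of that parsing loop from accel.sh — the `opc:`/`module:` sample input is hypothetical stand-in data, not real accel_perf output, and the real script matches many more keys than the two shown here.

```shell
#!/usr/bin/env bash
# Simplified sketch of the accel.sh var:val parsing loop seen in the trace.
# The heredoc input is a hypothetical stand-in for accel_perf's config dump.
accel_opc=""
accel_module=""
while IFS=: read -r var val; do     # split each line on the first ':'
    case "$var" in
        opc)    accel_opc=$val ;;    # e.g. "decompress"
        module) accel_module=$val ;; # e.g. "software"
    esac
done <<'EOF'
opc:decompress
module:software
EOF
echo "$accel_opc $accel_module"
```

Because the `while` loop reads from a heredoc rather than a pipe, it runs in the current shell, so `accel_opc` and `accel_module` remain set afterwards — which is why the trace can later check `[[ -n software ]]` and `[[ -n decompress ]]`.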
************************************ 00:06:44.547 START TEST accel_decomp_mthread 00:06:44.547 ************************************ 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:44.547 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:44.547 [2024-07-25 10:07:17.473897] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:44.548 [2024-07-25 10:07:17.473986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62089 ] 00:06:44.548 [2024-07-25 10:07:17.615659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.548 [2024-07-25 10:07:17.713198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.548 10:07:17 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # 
accel_module=software 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.548 10:07:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.923 ************************************ 00:06:45.923 END TEST accel_decomp_mthread 00:06:45.923 ************************************ 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.923 00:06:45.923 real 0m1.455s 00:06:45.923 user 0m1.260s 00:06:45.923 sys 0m0.104s 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.923 10:07:18 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:45.923 10:07:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.923 10:07:18 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.923 10:07:18 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:45.923 10:07:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.923 10:07:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.923 ************************************ 00:06:45.923 START TEST accel_decomp_full_mthread 00:06:45.923 ************************************ 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:45.923 10:07:18 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:45.923 [2024-07-25 10:07:18.985497] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:45.923 [2024-07-25 10:07:18.985586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62124 ] 00:06:45.923 [2024-07-25 10:07:19.126706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.181 [2024-07-25 10:07:19.218159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.181 10:07:19 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # 
case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 
10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.181 10:07:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
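Each test's trace also shows `build_accel_config` at work: it initializes `accel_json_cfg=()`, sets `local IFS=,`, and pipes the result through `jq -r .` before handing it to accel_perf on `/dev/fd/62`. The sketch below is an assumption-laden simplification of that pattern — the `"driver"` fragment is hypothetical, and the real helper's `jq -r .` pretty-printing step is omitted here for portability.

```shell
#!/usr/bin/env bash
# Hedged sketch of the build_accel_config pattern visible in the trace:
# JSON fragments collected in an array, joined with IFS=',' into one object.
accel_json_cfg=('"driver": "software"')   # hypothetical config fragment

build_accel_config() {
    # Joining with "${array[*]}" uses the first character of IFS (',')
    # between elements, matching the `local IFS=,` step in the trace.
    local IFS=,
    printf '{ %s }\n' "${accel_json_cfg[*]}"
}

build_accel_config
```

In the real script the joined object is additionally validated and reformatted with `jq -r .`, and the consumer reads it via a file-descriptor path (`-c /dev/fd/62`) rather than a file on disk.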
00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.556 ************************************ 00:06:47.556 END TEST accel_decomp_full_mthread 00:06:47.556 ************************************ 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.556 00:06:47.556 real 0m1.475s 00:06:47.556 user 0m1.279s 00:06:47.556 sys 0m0.103s 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.556 10:07:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:47.556 10:07:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.556 10:07:20 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:47.556 10:07:20 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:47.556 10:07:20 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:47.556 10:07:20 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.556 10:07:20 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:47.556 10:07:20 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.556 10:07:20 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.556 10:07:20 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.556 10:07:20 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.556 10:07:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.556 10:07:20 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.556 10:07:20 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:47.556 10:07:20 accel -- accel/accel.sh@41 -- # jq -r . 00:06:47.556 ************************************ 00:06:47.556 START TEST accel_dif_functional_tests 00:06:47.556 ************************************ 00:06:47.556 10:07:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:47.556 [2024-07-25 10:07:20.575239] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:47.556 [2024-07-25 10:07:20.575617] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62159 ] 00:06:47.556 [2024-07-25 10:07:20.717063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.814 [2024-07-25 10:07:20.817490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.814 [2024-07-25 10:07:20.817675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.814 [2024-07-25 10:07:20.817677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.814 00:06:47.814 00:06:47.814 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.814 http://cunit.sourceforge.net/ 00:06:47.814 00:06:47.814 00:06:47.814 Suite: accel_dif 00:06:47.814 Test: verify: DIF generated, GUARD check ...passed 00:06:47.814 Test: verify: DIF generated, APPTAG check ...passed 00:06:47.814 Test: verify: DIF generated, REFTAG check ...passed 00:06:47.814 Test: verify: DIF not generated, GUARD check ...passed 
00:06:47.814 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 10:07:20.888935] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:47.814 [2024-07-25 10:07:20.889021] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:47.814 passed 00:06:47.814 Test: verify: DIF not generated, REFTAG check ...passed 00:06:47.814 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:47.814 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-25 10:07:20.889137] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:47.814 passed 00:06:47.814 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-25 10:07:20.889210] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:47.814 passed 00:06:47.814 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:47.814 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:47.814 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed[2024-07-25 10:07:20.889590] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:47.814 00:06:47.814 Test: verify copy: DIF generated, GUARD check ...passed 00:06:47.814 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:47.814 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:47.814 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:47.814 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 10:07:20.889832] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:47.814 [2024-07-25 10:07:20.889917] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:47.814 passed 00:06:47.814 Test: verify copy: DIF not generated, REFTAG check ...passed 00:06:47.814 Test: generate copy: DIF generated, GUARD 
check ...[2024-07-25 10:07:20.889959] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:47.814 passed 00:06:47.814 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:47.814 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:47.814 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:47.814 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:47.814 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:47.814 Test: generate copy: iovecs-len validate ...passed 00:06:47.814 Test: generate copy: buffer alignment validate ...[2024-07-25 10:07:20.890364] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:47.814 passed 00:06:47.814 00:06:47.814 Run Summary: Type Total Ran Passed Failed Inactive 00:06:47.814 suites 1 1 n/a 0 0 00:06:47.814 tests 26 26 26 0 0 00:06:47.814 asserts 115 115 115 0 n/a 00:06:47.814 00:06:47.814 Elapsed time = 0.005 seconds 00:06:47.814 ************************************ 00:06:47.814 END TEST accel_dif_functional_tests 00:06:47.814 ************************************ 00:06:47.814 00:06:47.814 real 0m0.570s 00:06:47.814 user 0m0.682s 00:06:47.814 sys 0m0.133s 00:06:47.814 10:07:21 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.814 10:07:21 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:48.072 10:07:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.072 ************************************ 00:06:48.072 END TEST accel 00:06:48.072 ************************************ 00:06:48.072 00:06:48.072 real 0m33.388s 00:06:48.072 user 0m35.141s 00:06:48.072 sys 0m3.732s 00:06:48.072 10:07:21 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.072 10:07:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.072 
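The `accel_dif_functional_tests` failures above are intentional: the negative tests feed mismatched T10 DIF protection information, and each `_dif_verify` error reports the field (Guard, App Tag, or Ref Tag) with its Expected/Actual values before the test is marked passed. A minimal sketch of that expected-vs-actual comparison (the `dif_check` helper and its message format merely mirror the log lines above; this is not SPDK's implementation):

```shell
# Hedged sketch: reproduce the shape of the _dif_verify error messages seen in
# the log. dif_check is a hypothetical helper, not an SPDK function.
dif_check() {
    local field=$1 expected=$2 actual=$3
    if [[ "$expected" != "$actual" ]]; then
        echo "Failed to compare ${field}: Expected=${expected}, Actual=${actual}"
        return 1
    fi
    echo "${field} OK"
}

dif_check "Guard" 5a5a 7867 || true   # mismatched guard, as in the negative test
dif_check "App Tag" 14 14             # matching tag passes
```

A "passed" verdict for a negative test therefore means the mismatch was detected and reported, not that the tags matched.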
10:07:21 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.072 10:07:21 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:48.072 10:07:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.072 10:07:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.072 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:48.072 ************************************ 00:06:48.072 START TEST accel_rpc 00:06:48.072 ************************************ 00:06:48.072 10:07:21 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:48.072 * Looking for test storage... 00:06:48.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:48.072 10:07:21 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:48.072 10:07:21 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62224 00:06:48.072 10:07:21 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62224 00:06:48.072 10:07:21 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62224 ']' 00:06:48.072 10:07:21 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.072 10:07:21 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.072 10:07:21 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:48.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.072 10:07:21 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.072 10:07:21 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.072 10:07:21 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.353 [2024-07-25 10:07:21.366386] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:48.353 [2024-07-25 10:07:21.366743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62224 ] 00:06:48.353 [2024-07-25 10:07:21.509474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.353 [2024-07-25 10:07:21.595951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:49.286 10:07:22 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:49.286 10:07:22 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:49.286 10:07:22 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:49.286 10:07:22 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:49.286 10:07:22 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.286 ************************************ 00:06:49.286 START TEST accel_assign_opcode 00:06:49.286 ************************************ 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.286 [2024-07-25 10:07:22.208394] 
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.286 [2024-07-25 10:07:22.220399] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.286 software 00:06:49.286 ************************************ 00:06:49.286 END TEST accel_assign_opcode 00:06:49.286 ************************************ 00:06:49.286 00:06:49.286 real 
0m0.256s 00:06:49.286 user 0m0.049s 00:06:49.286 sys 0m0.016s 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.286 10:07:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:49.286 10:07:22 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62224 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62224 ']' 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62224 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62224 00:06:49.286 killing process with pid 62224 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62224' 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@967 -- # kill 62224 00:06:49.286 10:07:22 accel_rpc -- common/autotest_common.sh@972 -- # wait 62224 00:06:49.852 00:06:49.852 real 0m1.681s 00:06:49.852 user 0m1.662s 00:06:49.852 sys 0m0.457s 00:06:49.852 10:07:22 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.852 ************************************ 00:06:49.852 END TEST accel_rpc 00:06:49.852 10:07:22 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.852 ************************************ 00:06:49.852 10:07:22 -- common/autotest_common.sh@1142 -- # return 0 00:06:49.852 10:07:22 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:49.852 10:07:22 -- common/autotest_common.sh@1099 
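The `accel_assign_opcode` suite above assigns the `copy` opcode to a module over RPC (`accel_assign_opc -o copy -m software`, after first trying the invalid module `incorrect`), then confirms the assignment by filtering `accel_get_opc_assignments` output through `jq -r .copy | grep software`. A sketch of that final check, using a made-up stand-in for the RPC output and a jq-free extraction (both are assumptions for illustration, not captured SPDK output):

```shell
# Hypothetical sample shaped like accel_get_opc_assignments output; the real
# JSON comes from rpc.py against a running spdk_tgt.
assignments='{"copy":"software","decompress":"software"}'

# Pull the module assigned to the "copy" opcode (a jq-free stand-in for `jq -r .copy`)
copy_module=$(printf '%s' "$assignments" | tr -d '{}"' | tr ',' '\n' \
    | awk -F: '$1 == "copy" {print $2}')

echo "$copy_module"   # software
```

The test passes when the extracted module name is `software`, matching the module the RPC assigned.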
-- # '[' 2 -le 1 ']' 00:06:49.852 10:07:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.852 10:07:22 -- common/autotest_common.sh@10 -- # set +x 00:06:49.852 ************************************ 00:06:49.852 START TEST app_cmdline 00:06:49.852 ************************************ 00:06:49.852 10:07:22 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:49.852 * Looking for test storage... 00:06:49.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:49.852 10:07:23 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:49.852 10:07:23 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62315 00:06:49.852 10:07:23 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:49.852 10:07:23 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62315 00:06:49.852 10:07:23 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62315 ']' 00:06:49.852 10:07:23 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.852 10:07:23 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.852 10:07:23 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.853 10:07:23 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.853 10:07:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:50.110 [2024-07-25 10:07:23.116386] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:50.110 [2024-07-25 10:07:23.116501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62315 ] 00:06:50.110 [2024-07-25 10:07:23.257151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.110 [2024-07-25 10:07:23.352246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.039 10:07:23 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.039 10:07:23 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:51.039 10:07:23 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:51.039 { 00:06:51.039 "version": "SPDK v24.09-pre git sha1 c5d7cded4", 00:06:51.039 "fields": { 00:06:51.039 "major": 24, 00:06:51.039 "minor": 9, 00:06:51.039 "patch": 0, 00:06:51.039 "suffix": "-pre", 00:06:51.039 "commit": "c5d7cded4" 00:06:51.039 } 00:06:51.039 } 00:06:51.039 10:07:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:51.039 10:07:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:51.039 10:07:24 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:51.039 10:07:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:51.039 10:07:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:51.039 10:07:24 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.039 10:07:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:51.039 10:07:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:51.039 10:07:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:51.039 10:07:24 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.039 10:07:24 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:51.039 10:07:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:51.039 10:07:24 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.039 10:07:24 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:51.039 10:07:24 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.039 10:07:24 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.039 10:07:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.039 10:07:24 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.039 10:07:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.039 10:07:24 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.039 10:07:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.039 10:07:24 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.039 10:07:24 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:51.039 10:07:24 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.297 request: 00:06:51.297 { 00:06:51.297 "method": "env_dpdk_get_mem_stats", 00:06:51.297 "req_id": 1 00:06:51.297 } 00:06:51.297 Got JSON-RPC error response 00:06:51.297 response: 00:06:51.297 { 00:06:51.297 "code": -32601, 00:06:51.297 "message": "Method not found" 00:06:51.297 } 00:06:51.297 10:07:24 app_cmdline -- common/autotest_common.sh@651 -- # es=1 
00:06:51.297 10:07:24 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.297 10:07:24 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.297 10:07:24 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.297 10:07:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62315 00:06:51.297 10:07:24 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62315 ']' 00:06:51.297 10:07:24 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62315 00:06:51.297 10:07:24 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:51.297 10:07:24 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.297 10:07:24 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62315 00:06:51.297 killing process with pid 62315 00:06:51.297 10:07:24 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.297 10:07:24 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.297 10:07:24 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62315' 00:06:51.297 10:07:24 app_cmdline -- common/autotest_common.sh@967 -- # kill 62315 00:06:51.297 10:07:24 app_cmdline -- common/autotest_common.sh@972 -- # wait 62315 00:06:51.862 00:06:51.862 real 0m1.942s 00:06:51.862 user 0m2.352s 00:06:51.862 sys 0m0.471s 00:06:51.862 10:07:24 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.862 10:07:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:51.862 ************************************ 00:06:51.862 END TEST app_cmdline 00:06:51.862 ************************************ 00:06:51.862 10:07:24 -- common/autotest_common.sh@1142 -- # return 0 00:06:51.862 10:07:24 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:51.862 10:07:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.862 10:07:24 -- 
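The `env_dpdk_get_mem_stats` failure above is the expected outcome of the cmdline test: `spdk_tgt` was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so any other method returns the standard JSON-RPC error -32601 ("Method not found"). A sketch of that exchange's shape (no real socket is used here; the filtering logic is a stand-in for the target's allow-list, not SPDK code):

```shell
# Hedged sketch of the JSON-RPC 2.0 request/response shapes from the log above.
allowed="spdk_get_version rpc_get_methods"
method="env_dpdk_get_mem_stats"

request=$(printf '{"jsonrpc": "2.0", "method": "%s", "id": 1}' "$method")
echo "request: $request"

# A method outside the allow-list yields the standard "Method not found" error.
if ! printf '%s\n' $allowed | grep -qx "$method"; then
    printf '{"jsonrpc": "2.0", "id": 1, "error": {"code": -32601, "message": "Method not found"}}\n'
fi
```

The test script treats that error as success (`es=1` is tolerated by the `NOT` wrapper), since it proves the allow-list is enforced.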
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.862 10:07:24 -- common/autotest_common.sh@10 -- # set +x 00:06:51.862 ************************************ 00:06:51.862 START TEST version 00:06:51.862 ************************************ 00:06:51.862 10:07:24 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:51.862 * Looking for test storage... 00:06:51.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:51.862 10:07:25 version -- app/version.sh@17 -- # get_header_version major 00:06:51.862 10:07:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.862 10:07:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.862 10:07:25 version -- app/version.sh@14 -- # cut -f2 00:06:51.862 10:07:25 version -- app/version.sh@17 -- # major=24 00:06:51.862 10:07:25 version -- app/version.sh@18 -- # get_header_version minor 00:06:51.862 10:07:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.862 10:07:25 version -- app/version.sh@14 -- # cut -f2 00:06:51.862 10:07:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.862 10:07:25 version -- app/version.sh@18 -- # minor=9 00:06:51.862 10:07:25 version -- app/version.sh@19 -- # get_header_version patch 00:06:51.862 10:07:25 version -- app/version.sh@14 -- # cut -f2 00:06:51.862 10:07:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.862 10:07:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.862 10:07:25 version -- app/version.sh@19 -- # patch=0 00:06:51.862 10:07:25 version -- app/version.sh@20 -- # get_header_version suffix 00:06:51.862 10:07:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' 
/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.862 10:07:25 version -- app/version.sh@14 -- # cut -f2 00:06:51.862 10:07:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.862 10:07:25 version -- app/version.sh@20 -- # suffix=-pre 00:06:51.862 10:07:25 version -- app/version.sh@22 -- # version=24.9 00:06:51.862 10:07:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:51.862 10:07:25 version -- app/version.sh@28 -- # version=24.9rc0 00:06:51.862 10:07:25 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:51.862 10:07:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:51.862 10:07:25 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:51.862 10:07:25 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:51.862 00:06:51.862 real 0m0.176s 00:06:51.862 user 0m0.088s 00:06:51.862 sys 0m0.129s 00:06:51.862 ************************************ 00:06:51.862 END TEST version 00:06:51.862 ************************************ 00:06:51.862 10:07:25 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.862 10:07:25 version -- common/autotest_common.sh@10 -- # set +x 00:06:52.121 10:07:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:52.121 10:07:25 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:52.121 10:07:25 -- spdk/autotest.sh@198 -- # uname -s 00:06:52.121 10:07:25 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:52.121 10:07:25 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:52.121 10:07:25 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:52.121 10:07:25 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:52.121 10:07:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:52.121 10:07:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:52.121 
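The version test above builds `24.9rc0` by grepping `#define SPDK_VERSION_*` lines out of `include/spdk/version.h`, cutting the tab-separated value, and stripping quotes, then compares it against what the `spdk` Python package reports. A self-contained sketch of that `get_header_version` pipeline (the miniature header below is made up for illustration, not the real file):

```shell
# Hedged sketch of version.sh's get_header_version: grep a #define, take the
# tab-separated value with cut -f2, strip quotes with tr.
hdr=$(mktemp)
printf '#define SPDK_VERSION_MAJOR\t24\n'      > "$hdr"
printf '#define SPDK_VERSION_MINOR\t9\n'      >> "$hdr"
printf '#define SPDK_VERSION_SUFFIX\t"-pre"\n' >> "$hdr"

get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

version="$(get_header_version MAJOR).$(get_header_version MINOR)$(get_header_version SUFFIX)"
echo "$version"   # 24.9-pre
```

With `patch=0` the script drops the patch component and appends `rc0` to the `-pre` suffix's base, which is how `24.9rc0` in the log is derived.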
10:07:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:52.121 10:07:25 -- common/autotest_common.sh@10 -- # set +x 00:06:52.121 10:07:25 -- spdk/autotest.sh@262 -- # '[' 1 -eq 1 ']' 00:06:52.121 10:07:25 -- spdk/autotest.sh@263 -- # run_test iscsi_tgt /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:06:52.121 10:07:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.121 10:07:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.121 10:07:25 -- common/autotest_common.sh@10 -- # set +x 00:06:52.121 ************************************ 00:06:52.121 START TEST iscsi_tgt 00:06:52.121 ************************************ 00:06:52.121 10:07:25 iscsi_tgt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:06:52.121 * Looking for test storage... 00:06:52.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # uname -s 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@18 -- # iscsicleanup 00:06:52.121 Cleaning up iSCSI connection 00:06:52.121 10:07:25 iscsi_tgt -- 
common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:06:52.121 10:07:25 iscsi_tgt -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:06:52.121 iscsiadm: No matching sessions found 00:06:52.121 10:07:25 iscsi_tgt -- common/autotest_common.sh@981 -- # true 00:06:52.121 10:07:25 iscsi_tgt -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:06:52.121 iscsiadm: No records found 00:06:52.121 10:07:25 iscsi_tgt -- common/autotest_common.sh@982 -- # true 00:06:52.121 10:07:25 iscsi_tgt -- common/autotest_common.sh@983 -- # rm -rf 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@21 -- # create_veth_interfaces 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # ip link set init_br nomaster 00:06:52.121 Cannot find device "init_br" 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # true 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # ip link set tgt_br nomaster 00:06:52.121 Cannot find device "tgt_br" 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # true 00:06:52.121 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # ip link set tgt_br2 nomaster 00:06:52.379 Cannot find device "tgt_br2" 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # true 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # ip link set init_br down 00:06:52.379 Cannot find device "init_br" 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # true 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # ip link set tgt_br down 00:06:52.379 Cannot find device "tgt_br" 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # true 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # ip link set tgt_br2 down 00:06:52.379 Cannot find device "tgt_br2" 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # true 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # ip link delete iscsi_br type bridge 00:06:52.379 Cannot find device 
"iscsi_br" 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # true 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # ip link delete spdk_init_int 00:06:52.379 Cannot find device "spdk_init_int" 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # true 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:06:52.379 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # true 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:06:52.379 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # true 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # ip netns del spdk_iscsi_ns 00:06:52.379 Cannot remove namespace file "/var/run/netns/spdk_iscsi_ns": No such file or directory 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # true 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@44 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@47 -- # ip netns add spdk_iscsi_ns 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@50 -- # ip link add spdk_init_int type veth peer name init_br 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@51 -- # ip link add spdk_tgt_int type veth peer name tgt_br 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@52 -- # ip link add spdk_tgt_int2 type veth peer name tgt_br2 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@55 -- # ip link set spdk_tgt_int netns spdk_iscsi_ns 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@56 -- # ip link set spdk_tgt_int2 netns spdk_iscsi_ns 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@59 -- # ip addr add 10.0.0.2/24 dev spdk_init_int 00:06:52.379 10:07:25 
iscsi_tgt -- iscsi_tgt/common.sh@60 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.1/24 dev spdk_tgt_int 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@61 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.3/24 dev spdk_tgt_int2 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@64 -- # ip link set spdk_init_int up 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@65 -- # ip link set init_br up 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@66 -- # ip link set tgt_br up 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@67 -- # ip link set tgt_br2 up 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@68 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int up 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@69 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int2 up 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@70 -- # ip netns exec spdk_iscsi_ns ip link set lo up 00:06:52.379 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@73 -- # ip link add iscsi_br type bridge 00:06:52.636 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@74 -- # ip link set iscsi_br up 00:06:52.636 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@77 -- # ip link set init_br master iscsi_br 00:06:52.636 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@78 -- # ip link set tgt_br master iscsi_br 00:06:52.636 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@79 -- # ip link set tgt_br2 master iscsi_br 00:06:52.636 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@82 -- # iptables -I INPUT 1 -i spdk_init_int -p tcp --dport 3260 -j ACCEPT 00:06:52.636 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@83 -- # iptables -A FORWARD -i iscsi_br -o iscsi_br -j ACCEPT 00:06:52.636 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@86 -- # ping -c 1 10.0.0.1 00:06:52.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:52.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:06:52.636 00:06:52.636 --- 10.0.0.1 ping statistics --- 00:06:52.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.636 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:06:52.636 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@87 -- # ping -c 1 10.0.0.3 00:06:52.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:52.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:06:52.636 00:06:52.637 --- 10.0.0.3 ping statistics --- 00:06:52.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.637 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:06:52.637 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@88 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:06:52.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:52.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.030 ms 00:06:52.637 00:06:52.637 --- 10.0.0.2 ping statistics --- 00:06:52.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.637 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:06:52.637 10:07:25 iscsi_tgt -- iscsi_tgt/common.sh@89 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:06:52.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:52.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.019 ms 00:06:52.637 00:06:52.637 --- 10.0.0.2 ping statistics --- 00:06:52.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.637 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:06:52.637 10:07:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@23 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:06:52.637 10:07:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@25 -- # run_test iscsi_tgt_sock /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:06:52.637 10:07:25 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.637 10:07:25 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.637 10:07:25 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:06:52.637 ************************************ 00:06:52.637 START TEST iscsi_tgt_sock 00:06:52.637 ************************************ 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:06:52.637 * Looking for test storage... 
00:06:52.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:06:52.637 10:07:25 
iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@48 -- # iscsitestinit 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@50 -- # HELLO_SOCK_APP='ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/examples/hello_sock' 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@51 -- # SOCAT_APP=socat 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@52 -- # OPENSSL_APP=openssl 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@53 -- # PSK='-N ssl -E 1234567890ABCDEF -I psk.spdk.io' 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@58 -- # timing_enter sock_client 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:52.637 10:07:25 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:06:52.894 Testing client path 00:06:52.895 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@59 -- # echo 'Testing client path' 00:06:52.895 Waiting for process to start up and listen on address 10.0.0.2:3260... 00:06:52.895 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@63 -- # server_pid=62628 00:06:52.895 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@64 -- # trap 'killprocess $server_pid;iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:06:52.895 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@66 -- # waitfortcp 62628 10.0.0.2:3260 00:06:52.895 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@25 -- # local addr=10.0.0.2:3260 00:06:52.895 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@27 -- # echo 'Waiting for process to start up and listen on address 10.0.0.2:3260...' 
00:06:52.895 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@62 -- # socat tcp-l:3260,fork,bind=10.0.0.2 exec:/bin/cat 00:06:52.895 10:07:25 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@29 -- # xtrace_disable 00:06:52.895 10:07:25 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:06:53.460 [2024-07-25 10:07:26.437188] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:53.460 [2024-07-25 10:07:26.437276] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62638 ] 00:06:53.460 [2024-07-25 10:07:26.576025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.460 [2024-07-25 10:07:26.704032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.460 [2024-07-25 10:07:26.704117] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:06:53.460 [2024-07-25 10:07:26.704162] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:06:53.460 [2024-07-25 10:07:26.704363] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 35230) 00:06:53.460 [2024-07-25 10:07:26.704474] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:06:54.831 [2024-07-25 10:07:27.704508] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:06:54.831 [2024-07-25 10:07:27.704635] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:06:54.831 [2024-07-25 10:07:27.813783] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
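The client-path test above pairs hello_sock against a plain socat echo server (`exec:/bin/cat` writes every received byte straight back). The same round trip can be exercised on loopback without SPDK; the port 13260 below is an arbitrary stand-in for the 10.0.0.2:3260 listener in the trace, and the `sleep` is a crude replacement for the `waitfortcp` helper.

```shell
#!/usr/bin/env bash
# Minimal echo round trip using the same socat idiom as sock.sh line 62 above.
set -e
command -v socat >/dev/null || { echo "skipping: socat not installed"; exit 0; }

PORT=13260  # arbitrary free port; the real test binds 10.0.0.2:3260
socat "tcp-l:${PORT},fork,bind=127.0.0.1" exec:/bin/cat &
SERVER_PID=$!
trap 'kill $SERVER_PID 2>/dev/null' EXIT
sleep 1  # give the listener a moment; sock.sh polls with waitfortcp instead

REPLY=$(echo "ping" | socat - "tcp:127.0.0.1:${PORT}")
echo "server echoed: ${REPLY}"
[ "$REPLY" = "ping" ]
```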
00:06:54.831 [2024-07-25 10:07:27.813883] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62662 ] 00:06:54.831 [2024-07-25 10:07:27.957253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.831 [2024-07-25 10:07:28.057103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.831 [2024-07-25 10:07:28.057167] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:06:54.831 [2024-07-25 10:07:28.057189] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:06:54.831 [2024-07-25 10:07:28.057324] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 35240) 00:06:54.831 [2024-07-25 10:07:28.057375] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:06:56.201 [2024-07-25 10:07:29.057403] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:06:56.201 [2024-07-25 10:07:29.057603] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:06:56.201 [2024-07-25 10:07:29.166111] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:56.201 [2024-07-25 10:07:29.166214] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62681 ] 00:06:56.201 [2024-07-25 10:07:29.311580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.201 [2024-07-25 10:07:29.411811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.201 [2024-07-25 10:07:29.411876] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:06:56.201 [2024-07-25 10:07:29.411897] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:06:56.201 [2024-07-25 10:07:29.412147] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 35242) 00:06:56.201 [2024-07-25 10:07:29.412204] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:06:57.572 [2024-07-25 10:07:30.412232] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:06:57.572 [2024-07-25 10:07:30.412434] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:06:57.572 killing process with pid 62628 00:06:57.572 Testing SSL server path 00:06:57.572 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:06:57.572 [2024-07-25 10:07:30.614470] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:06:57.572 [2024-07-25 10:07:30.614568] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62725 ] 00:06:57.572 [2024-07-25 10:07:30.756856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.830 [2024-07-25 10:07:30.850222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.830 [2024-07-25 10:07:30.850278] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:06:57.830 [2024-07-25 10:07:30.850352] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(ssl) 00:06:58.088 [2024-07-25 10:07:31.128767] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:58.088 [2024-07-25 10:07:31.128865] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62735 ] 00:06:58.088 [2024-07-25 10:07:31.268033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.346 [2024-07-25 10:07:31.391583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.346 [2024-07-25 10:07:31.391877] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:06:58.346 [2024-07-25 10:07:31.392071] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:06:58.346 [2024-07-25 10:07:31.394488] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 34884) to (10.0.0.1, 3260) 00:06:58.346 [2024-07-25 10:07:31.395018] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 
3260) to (10.0.0.1, 34884) 00:06:58.346 [2024-07-25 10:07:31.396485] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:06:59.304 [2024-07-25 10:07:32.396680] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:06:59.304 [2024-07-25 10:07:32.397054] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:06:59.304 [2024-07-25 10:07:32.397105] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:06:59.304 [2024-07-25 10:07:32.501478] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:06:59.304 [2024-07-25 10:07:32.501778] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62758 ] 00:06:59.561 [2024-07-25 10:07:32.641966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.561 [2024-07-25 10:07:32.752106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.561 [2024-07-25 10:07:32.752389] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:06:59.561 [2024-07-25 10:07:32.752574] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:06:59.561 [2024-07-25 10:07:32.753431] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 50530) to (10.0.0.1, 3260) 00:06:59.561 [2024-07-25 10:07:32.754384] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 50530) 00:06:59.561 [2024-07-25 10:07:32.755182] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 
00:07:00.930 [2024-07-25 10:07:33.755231] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:00.930 [2024-07-25 10:07:33.755378] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:00.930 [2024-07-25 10:07:33.755435] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:00.930 [2024-07-25 10:07:33.863599] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:07:00.930 [2024-07-25 10:07:33.863717] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62774 ] 00:07:00.930 [2024-07-25 10:07:34.006280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.930 [2024-07-25 10:07:34.089815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.930 [2024-07-25 10:07:34.090041] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:00.930 [2024-07-25 10:07:34.090143] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:00.930 [2024-07-25 10:07:34.090768] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 50540) to (10.0.0.1, 3260) 00:07:00.930 [2024-07-25 10:07:34.092460] posix.c: 755:posix_sock_create_ssl_context: *ERROR*: Incorrect TLS version provided: 7 00:07:00.930 [2024-07-25 10:07:34.092617] posix.c:1033:posix_sock_create: *ERROR*: posix_sock_create_ssl_context() failed, errno = 2 00:07:00.930 [2024-07-25 10:07:34.092719] hello_sock.c: 309:hello_sock_connect: *ERROR*: connect error(2): No such file or directory 00:07:00.930 [2024-07-25 10:07:34.092768] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:00.930 [2024-07-25 10:07:34.092817] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on 
non-zero 00:07:00.930 [2024-07-25 10:07:34.092914] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:07:00.930 [2024-07-25 10:07:34.092946] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:01.187 [2024-07-25 10:07:34.193584] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:07:01.187 [2024-07-25 10:07:34.193923] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62783 ] 00:07:01.187 [2024-07-25 10:07:34.329531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.187 [2024-07-25 10:07:34.420250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.187 [2024-07-25 10:07:34.420517] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:01.187 [2024-07-25 10:07:34.420603] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:01.187 [2024-07-25 10:07:34.421945] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 50556) to (10.0.0.1, 3260) 00:07:01.187 [2024-07-25 10:07:34.422494] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 50556) 00:07:01.187 [2024-07-25 10:07:34.423507] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 
00:07:02.561 [2024-07-25 10:07:35.423644] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:02.561 [2024-07-25 10:07:35.423950] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:02.561 [2024-07-25 10:07:35.423997] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:02.561 SSL_connect:before SSL initialization 00:07:02.561 [2024-07-25 10:07:35.552096] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 58690) to (10.0.0.1, 3260) 00:07:02.561 SSL_connect:SSLv3/TLS write client hello 00:07:02.561 SSL_connect:SSLv3/TLS write client hello 00:07:02.561 SSL_connect:SSLv3/TLS read server hello 00:07:02.561 Can't use SSL_get_servername 00:07:02.561 SSL_connect:TLSv1.3 read encrypted extensions 00:07:02.561 SSL_connect:SSLv3/TLS read finished 00:07:02.561 SSL_connect:SSLv3/TLS write change cipher spec 00:07:02.561 SSL_connect:SSLv3/TLS write finished 00:07:02.561 SSL_connect:SSL negotiation finished successfully 00:07:02.561 SSL_connect:SSL negotiation finished successfully 00:07:02.561 SSL_connect:SSLv3/TLS read server session ticket 00:07:04.458 DONE 00:07:04.458 SSL3 alert write:warning:close notify 00:07:04.458 [2024-07-25 10:07:37.515986] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:04.458 [2024-07-25 10:07:37.552389] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:07:04.458 [2024-07-25 10:07:37.552488] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62833 ] 00:07:04.458 [2024-07-25 10:07:37.698787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.716 [2024-07-25 10:07:37.811593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.716 [2024-07-25 10:07:37.811931] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:04.716 [2024-07-25 10:07:37.812087] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:04.716 [2024-07-25 10:07:37.812740] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 50560) to (10.0.0.1, 3260) 00:07:04.716 [2024-07-25 10:07:37.814956] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 50560) 00:07:04.716 [2024-07-25 10:07:37.815568] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:04.716 [2024-07-25 10:07:37.815572] hello_sock.c: 240:hello_sock_writev_poll: *ERROR*: Write to socket failed. Closing connection... 
00:07:04.716 [2024-07-25 10:07:37.815612] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:07:05.648 [2024-07-25 10:07:38.815611] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:05.648 [2024-07-25 10:07:38.815787] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.648 [2024-07-25 10:07:38.815835] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:07:05.648 [2024-07-25 10:07:38.815843] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:05.906 [2024-07-25 10:07:38.905168] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:07:05.906 [2024-07-25 10:07:38.905244] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62847 ] 00:07:05.906 [2024-07-25 10:07:39.036433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.906 [2024-07-25 10:07:39.129317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.906 [2024-07-25 10:07:39.129599] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:05.906 [2024-07-25 10:07:39.129699] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:05.907 [2024-07-25 10:07:39.130474] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 50570) to (10.0.0.1, 3260) 00:07:05.907 [2024-07-25 10:07:39.131528] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 50570) 00:07:05.907 [2024-07-25 10:07:39.132046] posix.c: 586:posix_sock_psk_find_session_server_cb: *ERROR*: Unknown Client's PSK ID 00:07:05.907 [2024-07-25 10:07:39.132187] 
hello_sock.c: 240:hello_sock_writev_poll: *ERROR*: Write to socket failed. Closing connection... 00:07:05.907 [2024-07-25 10:07:39.132190] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:05.907 [2024-07-25 10:07:39.132309] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:07:07.282 [2024-07-25 10:07:40.132310] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:07.282 [2024-07-25 10:07:40.132739] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.282 [2024-07-25 10:07:40.132827] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:07:07.282 [2024-07-25 10:07:40.132919] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:07.282 killing process with pid 62725 00:07:08.217 [2024-07-25 10:07:41.242988] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:08.217 [2024-07-25 10:07:41.243354] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:08.217 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:07:08.217 [2024-07-25 10:07:41.399751] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:07:08.217 [2024-07-25 10:07:41.399839] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62892 ] 00:07:08.476 [2024-07-25 10:07:41.543079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.476 [2024-07-25 10:07:41.628506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.476 [2024-07-25 10:07:41.628571] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:08.476 [2024-07-25 10:07:41.628639] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(posix) 00:07:08.735 [2024-07-25 10:07:41.900849] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 58698) to (10.0.0.1, 3260) 00:07:08.735 [2024-07-25 10:07:41.900972] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:08.735 killing process with pid 62892 00:07:10.108 [2024-07-25 10:07:42.932858] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:10.108 [2024-07-25 10:07:42.932999] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:10.108 ************************************ 00:07:10.108 END TEST iscsi_tgt_sock 00:07:10.108 ************************************ 00:07:10.108 00:07:10.108 real 0m17.272s 00:07:10.108 user 0m19.635s 00:07:10.108 sys 0m3.171s 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:07:10.108 10:07:43 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:07:10.108 10:07:43 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@26 -- # [[ -d /usr/local/calsoft ]] 00:07:10.108 10:07:43 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@27 -- # run_test 
iscsi_tgt_calsoft /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh 00:07:10.108 10:07:43 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.108 10:07:43 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.108 10:07:43 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:07:10.108 ************************************ 00:07:10.108 START TEST iscsi_tgt_calsoft 00:07:10.108 ************************************ 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh 00:07:10.108 * Looking for test storage... 00:07:10.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 
00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@15 -- # MALLOC_BDEV_SIZE=64 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@16 -- # MALLOC_BLOCK_SIZE=512 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@18 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@19 -- # calsoft_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@22 -- # mkdir -p /usr/local/etc 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@23 -- # cp /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/its.conf /usr/local/etc/ 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@26 -- # echo IP=10.0.0.1 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@28 -- # timing_enter start_iscsi_tgt 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@722 -- # xtrace_disable 
00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@30 -- # iscsitestinit 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@33 -- # pid=62984 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@32 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x1 --wait-for-rpc 00:07:10.108 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@34 -- # echo 'Process pid: 62984' 00:07:10.108 Process pid: 62984 00:07:10.109 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@36 -- # trap 'killprocess $pid; delete_tmp_conf_files; iscsitestfini; exit 1 ' SIGINT SIGTERM EXIT 00:07:10.109 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@38 -- # waitforlisten 62984 00:07:10.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.109 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@829 -- # '[' -z 62984 ']' 00:07:10.109 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.109 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.109 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.109 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.109 10:07:43 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:07:10.109 [2024-07-25 10:07:43.326803] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:07:10.109 [2024-07-25 10:07:43.327154] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62984 ] 00:07:10.366 [2024-07-25 10:07:43.470907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.366 [2024-07-25 10:07:43.567148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.330 10:07:44 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.330 10:07:44 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@862 -- # return 0 00:07:11.330 10:07:44 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:07:11.330 10:07:44 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:07:11.587 iscsi_tgt is listening. Running tests... 00:07:11.587 10:07:44 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@41 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:07:11.587 10:07:44 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@43 -- # timing_exit start_iscsi_tgt 00:07:11.587 10:07:44 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:11.587 10:07:44 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:07:11.846 10:07:44 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_auth_group 1 -c 'user:root secret:tester' 00:07:12.104 10:07:45 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_discovery_auth -g 1 00:07:12.104 10:07:45 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:07:12.362 10:07:45 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:07:12.619 10:07:45 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b MyBdev 64 512 00:07:12.877 MyBdev 00:07:12.877 10:07:46 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -g 1 00:07:13.134 10:07:46 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@55 -- # sleep 1 00:07:14.068 10:07:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@57 -- # '[' '' ']' 00:07:14.068 10:07:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py /home/vagrant/spdk_repo/spdk/../output 00:07:14.068 [2024-07-25 10:07:47.285294] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:07:14.326 [2024-07-25 10:07:47.347333] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:14.326 [2024-07-25 10:07:47.367139] 
iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:14.326 [2024-07-25 10:07:47.367235] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:14.326 [2024-07-25 10:07:47.403773] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:14.326 [2024-07-25 10:07:47.442535] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:14.326 [2024-07-25 10:07:47.442639] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:14.326 [2024-07-25 10:07:47.481993] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:14.326 [2024-07-25 10:07:47.482082] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:14.326 [2024-07-25 10:07:47.515577] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:07:14.326 [2024-07-25 10:07:47.554287] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:14.326 [2024-07-25 10:07:47.574189] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:14.326 [2024-07-25 10:07:47.574280] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:07:14.326 [2024-07-25 10:07:47.574647] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:07:14.583 [2024-07-25 10:07:47.592844] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:14.583 [2024-07-25 10:07:47.592935] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:14.583 [2024-07-25 10:07:47.631805] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:14.583 [2024-07-25 10:07:47.653049] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:14.583 [2024-07-25 10:07:47.726864] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:14.583 [2024-07-25 10:07:47.726998] 
iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:14.583 [2024-07-25 10:07:47.741131] iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:14.583 [2024-07-25 10:07:47.741225] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:14.841 [2024-07-25 10:07:47.874461] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:14.841 [2024-07-25 10:07:47.909792] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:14.841 [2024-07-25 10:07:47.909901] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:14.841 [2024-07-25 10:07:47.928038] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:14.841 [2024-07-25 10:07:47.983067] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:14.841 [2024-07-25 10:07:48.003622] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:14.842 [2024-07-25 10:07:48.058198] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:15.099 [2024-07-25 10:07:48.187305] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:07:15.099 [2024-07-25 10:07:48.207592] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:15.099 [2024-07-25 10:07:48.227589] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:15.099 [2024-07-25 10:07:48.261753] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:15.099 [2024-07-25 10:07:48.293554] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:15.099 [2024-07-25 10:07:48.293995] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:15.099 [2024-07-25 10:07:48.312965] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(341) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:07:15.099 [2024-07-25 10:07:48.313052] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(8) 
ignore (ExpCmdSN=9, MaxCmdSN=71) 00:07:15.099 [2024-07-25 10:07:48.313586] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:07:15.099 [2024-07-25 10:07:48.354385] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(3) error ExpCmdSN=4 00:07:15.099 [2024-07-25 10:07:48.354543] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:15.358 [2024-07-25 10:07:48.447256] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:07:15.358 [2024-07-25 10:07:48.479663] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:07:15.358 [2024-07-25 10:07:48.494797] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:15.358 [2024-07-25 10:07:48.494890] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:15.358 [2024-07-25 10:07:48.532637] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:07:15.358 [2024-07-25 10:07:48.550644] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:15.358 [2024-07-25 10:07:48.566396] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:15.358 [2024-07-25 10:07:48.583393] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:07:15.358 PDU 00:07:15.358 00000000 01 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=..... 00:07:15.358 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:07:15.358 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:07:15.358 [2024-07-25 10:07:48.583442] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection 00:07:15.358 [2024-07-25 10:07:48.600563] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:15.358 [2024-07-25 10:07:48.600786] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:15.615 [2024-07-25 10:07:48.620851] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:07:15.873 [2024-07-25 10:07:48.949728] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:15.873 [2024-07-25 10:07:48.970377] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:15.873 [2024-07-25 10:07:48.970474] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:15.873 [2024-07-25 10:07:49.018264] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:15.873 [2024-07-25 10:07:49.018393] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:15.873 [2024-07-25 10:07:49.047093] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:15.873 [2024-07-25 10:07:49.066084] param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 276 00:07:15.873 [2024-07-25 10:07:49.066130] iscsi.c:1303:iscsi_op_login_store_incoming_params: *ERROR*: iscsi_parse_params() failed 00:07:15.873 [2024-07-25 10:07:49.084459] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:15.873 [2024-07-25 10:07:49.084556] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:15.873 [2024-07-25 10:07:49.101136] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:15.873 [2024-07-25 10:07:49.101231] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:15.873 [2024-07-25 10:07:49.118935] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:07:15.873 [2024-07-25 10:07:49.119027] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=4, 
MaxCmdSN=66) 00:07:15.873 [2024-07-25 10:07:49.119276] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:07:15.873 [2024-07-25 10:07:49.119341] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=6, MaxCmdSN=67) 00:07:15.873 [2024-07-25 10:07:49.119881] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:07:16.134 [2024-07-25 10:07:49.187695] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:16.134 [2024-07-25 10:07:49.203126] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:16.134 [2024-07-25 10:07:49.219913] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:07:16.134 [2024-07-25 10:07:49.250045] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:07:16.134 [2024-07-25 10:07:49.270521] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:07:16.134 [2024-07-25 10:07:49.286640] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:16.134 [2024-07-25 10:07:49.303117] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:16.134 [2024-07-25 10:07:49.303209] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.134 [2024-07-25 10:07:49.321620] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:16.134 [2024-07-25 10:07:49.321752] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.134 [2024-07-25 10:07:49.337595] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:07:16.134 [2024-07-25 10:07:49.337769] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:07:16.134 [2024-07-25 10:07:49.355010] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:07:16.134 [2024-07-25 10:07:49.372091] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, 
MaxCmdSN=66) 00:07:16.134 [2024-07-25 10:07:49.372176] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.472 [2024-07-25 10:07:49.434670] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:16.472 [2024-07-25 10:07:49.451340] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(2) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:07:16.472 [2024-07-25 10:07:49.451437] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:16.472 [2024-07-25 10:07:49.451484] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.472 [2024-07-25 10:07:49.467060] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:16.472 [2024-07-25 10:07:49.467165] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.472 [2024-07-25 10:07:49.502949] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:16.472 [2024-07-25 10:07:49.521018] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:16.472 [2024-07-25 10:07:49.521065] iscsi.c:3961:iscsi_handle_recovery_datain: *ERROR*: Initiator requests BegRun: 0x00000000, RunLength:0x00001000 greater than maximum DataSN: 0x00000004. 00:07:16.472 [2024-07-25 10:07:49.521077] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=10) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:07:16.472 [2024-07-25 10:07:49.521086] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection 00:07:16.472 [2024-07-25 10:07:49.540708] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:16.472 [2024-07-25 10:07:49.577066] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:16.472 [2024-07-25 10:07:49.577163] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.472 [2024-07-25 10:07:49.609812] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:16.472 [2024-07-25 10:07:49.609903] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.472 [2024-07-25 10:07:49.628761] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:16.472 [2024-07-25 10:07:49.628871] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.472 [2024-07-25 10:07:49.663211] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:16.472 [2024-07-25 10:07:49.663427] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.472 [2024-07-25 10:07:49.696306] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:16.472 [2024-07-25 10:07:49.696403] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.472 [2024-07-25 10:07:49.713783] iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 2745410467, and the dataout task tag is 2728567458 00:07:16.472 [2024-07-25 10:07:49.713886] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:16.472 [2024-07-25 10:07:49.714115] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:16.472 [2024-07-25 10:07:49.714179] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:07:16.730 [2024-07-25 10:07:49.730666] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:16.730 [2024-07-25 10:07:49.730777] 
iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.730 [2024-07-25 10:07:49.764841] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.730 [2024-07-25 10:07:49.783110] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key ImmediateDataa 00:07:16.730 [2024-07-25 10:07:49.802981] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:16.730 [2024-07-25 10:07:49.827699] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:07:16.730 [2024-07-25 10:07:49.866014] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:16.730 [2024-07-25 10:07:49.866302] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.730 [2024-07-25 10:07:49.884739] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:16.730 [2024-07-25 10:07:49.959293] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:07:16.730 [2024-07-25 10:07:49.977062] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:16.730 [2024-07-25 10:07:49.977165] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.988 [2024-07-25 10:07:50.011614] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:16.988 [2024-07-25 10:07:50.011732] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.988 [2024-07-25 10:07:50.066313] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:07:16.988 [2024-07-25 10:07:50.126023] iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:16.988 [2024-07-25 10:07:50.126064] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:07:16.988 [2024-07-25 10:07:50.126075] 
iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:07:16.988 [2024-07-25 10:07:50.142534] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:16.988 [2024-07-25 10:07:50.142656] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:16.989 [2024-07-25 10:07:50.160899] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:16.989 [2024-07-25 10:07:50.195521] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:16.989 [2024-07-25 10:07:50.210296] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:07:16.989 PDU 00:07:16.989 00000000 00 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=..... 00:07:16.989 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:07:16.989 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:07:16.989 [2024-07-25 10:07:50.210336] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection 00:07:16.989 [2024-07-25 10:07:50.229165] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=ffffffff 00:07:17.247 [2024-07-25 10:07:50.247759] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:17.247 [2024-07-25 10:07:50.247860] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:17.247 [2024-07-25 10:07:50.318230] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:17.247 [2024-07-25 10:07:50.318343] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:17.247 [2024-07-25 10:07:50.336997] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:17.247 [2024-07-25 10:07:50.337106] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:17.247 [2024-07-25 10:07:50.403150] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:17.505 [2024-07-25 10:07:50.535609] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:19.404 [2024-07-25 10:07:52.494936] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:19.404 [2024-07-25 10:07:52.515638] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:19.404 [2024-07-25 10:07:52.515740] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:19.404 [2024-07-25 10:07:52.531161] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:19.404 [2024-07-25 10:07:52.531250] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:19.404 [2024-07-25 10:07:52.612339] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:19.404 [2024-07-25 10:07:52.612488] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:19.404 [2024-07-25 10:07:52.629641] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: 
CmdSN(0) error ExpCmdSN=1 00:07:20.777 [2024-07-25 10:07:53.666689] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:21.712 [2024-07-25 10:07:54.647073] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=6, MaxCmdSN=68) 00:07:21.712 [2024-07-25 10:07:54.647975] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=7 00:07:21.712 [2024-07-25 10:07:54.666919] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=5, MaxCmdSN=68) 00:07:22.731 [2024-07-25 10:07:55.667207] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=6, MaxCmdSN=69) 00:07:22.731 [2024-07-25 10:07:55.667348] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=7, MaxCmdSN=70) 00:07:22.731 [2024-07-25 10:07:55.667363] iscsi.c:4028:iscsi_handle_status_snack: *ERROR*: Unable to find StatSN: 0x00000007. For a StatusSNACK, assuming this is a proactive SNACK for an untransmitted StatSN, ignoring. 00:07:22.731 [2024-07-25 10:07:55.667377] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=8 00:07:35.033 [2024-07-25 10:08:07.713810] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:07:35.033 [2024-07-25 10:08:07.733264] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:07:35.033 [2024-07-25 10:08:07.754798] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:07:35.033 [2024-07-25 10:08:07.755563] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:07:35.033 [2024-07-25 10:08:07.773668] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:07:35.033 [2024-07-25 10:08:07.796750] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:07:35.033 [2024-07-25 10:08:07.818630] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:07:35.033 [2024-07-25 10:08:07.855497] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: 
CmdSN(0) error ExpCmdSN=64 00:07:35.033 [2024-07-25 10:08:07.859717] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:07:35.033 [2024-07-25 10:08:07.882197] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1107296256) error ExpCmdSN=66 00:07:35.033 [2024-07-25 10:08:07.896805] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:07:35.033 [2024-07-25 10:08:07.922741] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=67 00:07:35.033 Skipping tc_ffp_15_2. It is known to fail. 00:07:35.033 Skipping tc_ffp_29_2. It is known to fail. 00:07:35.033 Skipping tc_ffp_29_3. It is known to fail. 00:07:35.033 Skipping tc_ffp_29_4. It is known to fail. 00:07:35.033 Skipping tc_err_1_1. It is known to fail. 00:07:35.033 Skipping tc_err_1_2. It is known to fail. 00:07:35.033 Skipping tc_err_2_8. It is known to fail. 00:07:35.033 Skipping tc_err_3_1. It is known to fail. 00:07:35.033 Skipping tc_err_3_2. It is known to fail. 00:07:35.033 Skipping tc_err_3_3. It is known to fail. 00:07:35.033 Skipping tc_err_3_4. It is known to fail. 00:07:35.033 Skipping tc_err_5_1. It is known to fail. 00:07:35.033 Skipping tc_login_3_1. It is known to fail. 00:07:35.033 Skipping tc_login_11_2. It is known to fail. 00:07:35.033 Skipping tc_login_11_4. It is known to fail. 00:07:35.033 Skipping tc_login_2_2. It is known to fail. 00:07:35.033 Skipping tc_login_29_1. It is known to fail. 
00:07:35.033 Cleaning up iSCSI connection 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@62 -- # failed=0 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@67 -- # iscsicleanup 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:07:35.033 iscsiadm: No matching sessions found 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@981 -- # true 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:07:35.033 iscsiadm: No records found 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # true 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@983 -- # rm -rf 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@68 -- # killprocess 62984 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@948 -- # '[' -z 62984 ']' 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@952 -- # kill -0 62984 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@953 -- # uname 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62984 00:07:35.033 killing process with pid 62984 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:35.033 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:35.034 10:08:08 
iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62984' 00:07:35.034 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@967 -- # kill 62984 00:07:35.034 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@972 -- # wait 62984 00:07:35.292 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@69 -- # delete_tmp_conf_files 00:07:35.292 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@12 -- # rm -f /usr/local/etc/its.conf 00:07:35.292 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@70 -- # iscsitestfini 00:07:35.292 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:07:35.292 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@71 -- # exit 0 00:07:35.292 ************************************ 00:07:35.292 END TEST iscsi_tgt_calsoft 00:07:35.292 ************************************ 00:07:35.292 00:07:35.292 real 0m25.284s 00:07:35.292 user 0m40.784s 00:07:35.292 sys 0m3.046s 00:07:35.292 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.292 10:08:08 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:07:35.292 10:08:08 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:07:35.292 10:08:08 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@31 -- # run_test iscsi_tgt_filesystem /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:07:35.292 10:08:08 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.292 10:08:08 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.292 10:08:08 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:07:35.292 ************************************ 00:07:35.292 START TEST iscsi_tgt_filesystem 00:07:35.292 ************************************ 00:07:35.292 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:07:35.553 * Looking for test storage... 00:07:35.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:07:35.553 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/setup/common.sh 00:07:35.553 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:35.553 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:35.553 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:35.553 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:35.553 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:35.553 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:35.553 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:35.553 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:35.553 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 
00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:35.554 10:08:08 
iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@42 -- # 
CONFIG_DAOS_DIR= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- 
common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- 
common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 
00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:35.554 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:35.554 #define SPDK_CONFIG_H 00:07:35.554 #define SPDK_CONFIG_APPS 1 00:07:35.554 #define SPDK_CONFIG_ARCH native 00:07:35.554 #undef SPDK_CONFIG_ASAN 00:07:35.554 #undef SPDK_CONFIG_AVAHI 00:07:35.554 #undef SPDK_CONFIG_CET 00:07:35.554 #define SPDK_CONFIG_COVERAGE 1 00:07:35.554 #define SPDK_CONFIG_CROSS_PREFIX 00:07:35.554 #undef SPDK_CONFIG_CRYPTO 00:07:35.554 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:35.554 #undef SPDK_CONFIG_CUSTOMOCF 00:07:35.554 #undef SPDK_CONFIG_DAOS 00:07:35.554 #define SPDK_CONFIG_DAOS_DIR 00:07:35.554 #define SPDK_CONFIG_DEBUG 1 00:07:35.554 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:35.554 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:35.554 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:35.554 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:35.554 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:35.554 #undef SPDK_CONFIG_DPDK_UADK 00:07:35.554 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:35.554 #define SPDK_CONFIG_EXAMPLES 1 00:07:35.554 #undef SPDK_CONFIG_FC 00:07:35.554 #define SPDK_CONFIG_FC_PATH 00:07:35.554 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:35.555 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:35.555 #undef SPDK_CONFIG_FUSE 00:07:35.555 #undef SPDK_CONFIG_FUZZER 00:07:35.555 #define SPDK_CONFIG_FUZZER_LIB 00:07:35.555 #undef SPDK_CONFIG_GOLANG 00:07:35.555 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:35.555 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:35.555 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 
00:07:35.555 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:35.555 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:35.555 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:35.555 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:35.555 #define SPDK_CONFIG_IDXD 1 00:07:35.555 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:35.555 #undef SPDK_CONFIG_IPSEC_MB 00:07:35.555 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:35.555 #define SPDK_CONFIG_ISAL 1 00:07:35.555 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:35.555 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:35.555 #define SPDK_CONFIG_LIBDIR 00:07:35.555 #undef SPDK_CONFIG_LTO 00:07:35.555 #define SPDK_CONFIG_MAX_LCORES 128 00:07:35.555 #define SPDK_CONFIG_NVME_CUSE 1 00:07:35.555 #undef SPDK_CONFIG_OCF 00:07:35.555 #define SPDK_CONFIG_OCF_PATH 00:07:35.555 #define SPDK_CONFIG_OPENSSL_PATH 00:07:35.555 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:35.555 #define SPDK_CONFIG_PGO_DIR 00:07:35.555 #undef SPDK_CONFIG_PGO_USE 00:07:35.555 #define SPDK_CONFIG_PREFIX /usr/local 00:07:35.555 #undef SPDK_CONFIG_RAID5F 00:07:35.555 #define SPDK_CONFIG_RBD 1 00:07:35.555 #define SPDK_CONFIG_RDMA 1 00:07:35.555 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:35.555 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:35.555 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:35.555 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:35.555 #define SPDK_CONFIG_SHARED 1 00:07:35.555 #undef SPDK_CONFIG_SMA 00:07:35.555 #define SPDK_CONFIG_TESTS 1 00:07:35.555 #undef SPDK_CONFIG_TSAN 00:07:35.555 #define SPDK_CONFIG_UBLK 1 00:07:35.555 #define SPDK_CONFIG_UBSAN 1 00:07:35.555 #undef SPDK_CONFIG_UNIT_TESTS 00:07:35.555 #undef SPDK_CONFIG_URING 00:07:35.555 #define SPDK_CONFIG_URING_PATH 00:07:35.555 #undef SPDK_CONFIG_URING_ZNS 00:07:35.555 #undef SPDK_CONFIG_USDT 00:07:35.555 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:35.555 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:35.555 #undef SPDK_CONFIG_VFIO_USER 00:07:35.555 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:35.555 #define SPDK_CONFIG_VHOST 1 00:07:35.555 #define 
SPDK_CONFIG_VIRTIO 1 00:07:35.555 #undef SPDK_CONFIG_VTUNE 00:07:35.555 #define SPDK_CONFIG_VTUNE_DIR 00:07:35.555 #define SPDK_CONFIG_WERROR 1 00:07:35.555 #define SPDK_CONFIG_WPDK_DIR 00:07:35.555 #undef SPDK_CONFIG_XNVME 00:07:35.555 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- 
pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # uname -s 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:35.555 10:08:08 
iscsi_tgt.iscsi_tgt_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@70 -- # : 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:35.555 10:08:08 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@76 -- # : 1 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@78 -- # : 1 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@86 -- # : 0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@92 -- # : 0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:35.555 10:08:08 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@94 -- # : 0 00:07:35.555 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@104 -- # : 1 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:35.556 
10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@124 -- # : 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 
00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@138 -- # : 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:35.556 
10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@154 -- # : 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@167 -- # : 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@170 -- # export 
SPDK_TEST_NVMF_MDNS 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@193 -- # export 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:35.556 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem 
-- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:35.557 10:08:08 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@318 -- # [[ -z 63697 ]] 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@318 -- # kill -0 63697 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@330 -- # local 
requested_size=2147483648 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.jPLGgL 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem /tmp/spdk.jPLGgL/tests/filesystem /tmp/spdk.jPLGgL 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264516608 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2496167936 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10989568 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem 
-- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13794746368 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5233991680 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13794746368 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5233991680 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:07:35.557 10:08:08 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267748352 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=143360 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:35.557 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 
00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt/output 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=93584162816 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6118617088 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:35.558 * Looking for test storage... 
00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # target_space=13794746368 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:07:35.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:07:35.558 10:08:08 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@11 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.558 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@13 -- # iscsitestinit 00:07:35.559 10:08:08 
iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@29 -- # timing_enter start_iscsi_tgt 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.559 Process pid: 63734 00:07:35.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@32 -- # pid=63734 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@33 -- # echo 'Process pid: 63734' 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@35 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@37 -- # waitforlisten 63734 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@829 -- # '[' -z 63734 ']' 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.559 10:08:08 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.818 [2024-07-25 10:08:08.836994] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:07:35.818 [2024-07-25 10:08:08.837345] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63734 ] 00:07:35.818 [2024-07-25 10:08:08.983504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.076 [2024-07-25 10:08:09.081925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.076 [2024-07-25 10:08:09.082081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.076 [2024-07-25 10:08:09.082270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.076 [2024-07-25 10:08:09.082272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.643 10:08:09 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.643 10:08:09 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@862 -- # return 0 00:07:36.643 10:08:09 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@38 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:07:36.643 10:08:09 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.643 10:08:09 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.643 10:08:09 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.643 
10:08:09 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@39 -- # rpc_cmd framework_start_init 00:07:36.643 10:08:09 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.643 10:08:09 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.902 iscsi_tgt is listening. Running tests... 00:07:36.902 10:08:09 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.902 10:08:09 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@40 -- # echo 'iscsi_tgt is listening. Running tests...' 00:07:36.902 10:08:09 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@42 -- # timing_exit start_iscsi_tgt 00:07:36.902 10:08:09 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:36.902 10:08:09 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.902 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # get_first_nvme_bdf 00:07:36.902 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # bdfs=() 00:07:36.902 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # local bdfs 00:07:36.902 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:07:36.902 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:07:36.902 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:36.902 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # local bdfs 00:07:36.902 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:36.903 10:08:10 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # bdf=0000:00:10.0 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@45 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@46 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@47 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:00:10.0 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.903 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.161 Nvme0n1 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- 
filesystem/filesystem.sh@49 -- # rpc_cmd bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # ls_guid=9f835824-9b3d-4da8-8161-a84a6bc787e6 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # get_lvs_free_mb 9f835824-9b3d-4da8-8161-a84a6bc787e6 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1364 -- # local lvs_uuid=9f835824-9b3d-4da8-8161-a84a6bc787e6 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1365 -- # local lvs_info 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1366 -- # local fc 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1367 -- # local cs 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_lvol_get_lvstores 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:07:37.161 { 00:07:37.161 "uuid": "9f835824-9b3d-4da8-8161-a84a6bc787e6", 00:07:37.161 "name": "lvs_0", 00:07:37.161 "base_bdev": "Nvme0n1", 00:07:37.161 "total_data_clusters": 1278, 00:07:37.161 "free_clusters": 1278, 00:07:37.161 "block_size": 4096, 00:07:37.161 "cluster_size": 4194304 00:07:37.161 } 00:07:37.161 ]' 00:07:37.161 10:08:10 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="9f835824-9b3d-4da8-8161-a84a6bc787e6") .free_clusters' 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # fc=1278 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="9f835824-9b3d-4da8-8161-a84a6bc787e6") .cluster_size' 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # cs=4194304 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1373 -- # free_mb=5112 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1374 -- # echo 5112 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # free_mb=5112 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@52 -- # '[' 5112 -gt 2048 ']' 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@53 -- # rpc_cmd bdev_lvol_create -u 9f835824-9b3d-4da8-8161-a84a6bc787e6 lbd_0 2048 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.161 f002920f-8254-4f46-aa03-bc730d407fc5 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@61 -- # lvol_name=lvs_0/lbd_0 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@62 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias lvs_0/lbd_0:0 1:2 256 -d 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.161 10:08:10 
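The `get_lvs_free_mb` helper traced above derives the lvstore's free space in MiB from the `free_clusters` and `cluster_size` fields returned by `bdev_lvol_get_lvstores` (here `fc=1278`, `cs=4194304`). A minimal sketch of that arithmetic, using the values from this log:

```shell
# Free-space computation as performed by get_lvs_free_mb in the trace above.
# fc and cs are the values jq extracted from bdev_lvol_get_lvstores output.
fc=1278        # free_clusters
cs=4194304     # cluster_size in bytes (4 MiB)
free_mb=$(( fc * cs / 1024 / 1024 ))
echo "$free_mb"   # prints 5112, matching free_mb in the log
```

Since 5112 MiB exceeds the 2048 MiB requested for `lbd_0`, the `'[' 5112 -gt 2048 ']'` guard passes and `bdev_lvol_create` proceeds.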
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.161 10:08:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@63 -- # sleep 1 00:07:38.095 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@65 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:07:38.354 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:07:38.354 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@66 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:07:38.355 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:07:38.355 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@67 -- # waitforiscsidevices 1 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@116 -- # local num=1 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:07:38.355 [2024-07-25 10:08:11.418621] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # n=1 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@123 -- # return 0 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # get_bdev_size lvs_0/lbd_0 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1378 -- # local 
bdev_name=lvs_0/lbd_0 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1380 -- # local bs 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1381 -- # local nb 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b lvs_0/lbd_0 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:38.355 { 00:07:38.355 "name": "f002920f-8254-4f46-aa03-bc730d407fc5", 00:07:38.355 "aliases": [ 00:07:38.355 "lvs_0/lbd_0" 00:07:38.355 ], 00:07:38.355 "product_name": "Logical Volume", 00:07:38.355 "block_size": 4096, 00:07:38.355 "num_blocks": 524288, 00:07:38.355 "uuid": "f002920f-8254-4f46-aa03-bc730d407fc5", 00:07:38.355 "assigned_rate_limits": { 00:07:38.355 "rw_ios_per_sec": 0, 00:07:38.355 "rw_mbytes_per_sec": 0, 00:07:38.355 "r_mbytes_per_sec": 0, 00:07:38.355 "w_mbytes_per_sec": 0 00:07:38.355 }, 00:07:38.355 "claimed": false, 00:07:38.355 "zoned": false, 00:07:38.355 "supported_io_types": { 00:07:38.355 "read": true, 00:07:38.355 "write": true, 00:07:38.355 "unmap": true, 00:07:38.355 "flush": false, 00:07:38.355 "reset": true, 00:07:38.355 "nvme_admin": false, 00:07:38.355 "nvme_io": false, 00:07:38.355 "nvme_io_md": false, 00:07:38.355 "write_zeroes": true, 00:07:38.355 "zcopy": false, 00:07:38.355 "get_zone_info": false, 00:07:38.355 "zone_management": false, 00:07:38.355 "zone_append": false, 00:07:38.355 "compare": false, 00:07:38.355 "compare_and_write": false, 00:07:38.355 "abort": 
false, 00:07:38.355 "seek_hole": true, 00:07:38.355 "seek_data": true, 00:07:38.355 "copy": false, 00:07:38.355 "nvme_iov_md": false 00:07:38.355 }, 00:07:38.355 "driver_specific": { 00:07:38.355 "lvol": { 00:07:38.355 "lvol_store_uuid": "9f835824-9b3d-4da8-8161-a84a6bc787e6", 00:07:38.355 "base_bdev": "Nvme0n1", 00:07:38.355 "thin_provision": false, 00:07:38.355 "num_allocated_clusters": 512, 00:07:38.355 "snapshot": false, 00:07:38.355 "clone": false, 00:07:38.355 "esnap_clone": false 00:07:38.355 } 00:07:38.355 } 00:07:38.355 } 00:07:38.355 ]' 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # bs=4096 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # nb=524288 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1387 -- # bdev_size=2048 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1388 -- # echo 2048 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # lvol_size=2147483648 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@70 -- # trap 'iscsicleanup; remove_backends; umount /mnt/device; rm -rf /mnt/device; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@72 -- # mkdir -p /mnt/device 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # grep 'Attached scsi disk' 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # iscsiadm -m session -P 3 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # awk '{print $4}' 00:07:38.355 10:08:11 
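The `get_bdev_size` steps traced above convert the `block_size` and `num_blocks` fields from `bdev_get_bdevs` into a size in MiB, which `filesystem.sh` then scales back to bytes as `lvol_size` and compares against the size reported under `/sys/block/sda`. With the values from this log (`bs=4096`, `nb=524288`):

```shell
# Size computation as performed by get_bdev_size / filesystem.sh@69 above.
bs=4096        # block_size from bdev_get_bdevs
nb=524288      # num_blocks from bdev_get_bdevs
bdev_size_mb=$(( bs * nb / 1024 / 1024 ))
lvol_size=$(( bdev_size_mb * 1024 * 1024 ))
echo "$bdev_size_mb $lvol_size"   # prints "2048 2147483648", matching the log
```

The `(( lvol_size == dev_size ))` check then confirms the iSCSI-attached disk exposes exactly the logical volume's 2147483648 bytes.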
iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # dev=sda 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@76 -- # waitforfile /dev/sda 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1265 -- # local i=0 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1276 -- # return 0 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # sec_size_to_bytes sda 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@76 -- # local dev=sda 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@78 -- # [[ -e /sys/block/sda ]] 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@80 -- # echo 2147483648 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # dev_size=2147483648 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@80 -- # (( lvol_size == dev_size )) 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@81 -- # parted -s /dev/sda mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:38.355 10:08:11 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@82 -- # sleep 1 00:07:38.355 [2024-07-25 10:08:11.598867] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@144 -- # run_test iscsi_tgt_filesystem_ext4 filesystem_test ext4 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.787 10:08:12 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.787 ************************************ 00:07:39.787 START TEST iscsi_tgt_filesystem_ext4 00:07:39.787 ************************************ 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1123 -- # filesystem_test ext4 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@89 -- # fstype=ext4 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@91 -- # make_filesystem ext4 /dev/sda1 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda1 00:07:39.787 mke2fs 1.46.5 (30-Dec-2021) 00:07:39.787 Discarding device blocks: 0/522240 done 00:07:39.787 Creating filesystem with 522240 4k blocks and 130560 inodes 00:07:39.787 Filesystem UUID: 15133f6a-066f-49a9-a28a-496cf30b0499 00:07:39.787 Superblock backups stored on blocks: 00:07:39.787 32768, 98304, 163840, 229376, 294912 00:07:39.787 00:07:39.787 Allocating group tables: 0/16 done 00:07:39.787 Writing inode 
tables: 0/16 done 00:07:39.787 Creating journal (8192 blocks): done 00:07:39.787 Writing superblocks and filesystem accounting information: 0/16 done 00:07:39.787 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@93 -- # '[' 0 -eq 1 ']' 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@119 -- # touch /mnt/device/aaa 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@120 -- # umount /mnt/device 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@122 -- # iscsiadm -m node --logout 00:07:39.787 Logging out of session [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:07:39.787 Logout of [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@123 -- # waitforiscsidevices 0 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=0 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:07:39.787 iscsiadm: No active sessions. 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # true 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=0 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:07:39.787 10:08:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@124 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:07:39.787 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:07:39.787 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:07:39.787 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@125 -- # waitforiscsidevices 1 00:07:39.787 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=1 00:07:39.787 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:07:39.787 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:07:39.788 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:07:39.788 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:07:39.788 [2024-07-25 10:08:13.014564] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:07:39.788 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=1 00:07:39.788 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:07:39.788 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@127 -- # iscsiadm -m session -P 3 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@127 -- # awk '{print $4}' 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@127 -- # grep 'Attached scsi disk' 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@127 -- # dev=sda 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@129 -- # waitforfile /dev/sda1 00:07:40.046 10:08:13 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1265 -- # local i=0 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1276 -- # return 0 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@130 -- # mount -o rw /dev/sda1 /mnt/device 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@132 -- # '[' -f /mnt/device/aaa ']' 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@133 -- # echo 'File existed.' 00:07:40.046 File existed. 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@139 -- # rm -rf /mnt/device/aaa 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@140 -- # umount /mnt/device 00:07:40.046 00:07:40.046 real 0m0.506s 00:07:40.046 user 0m0.040s 00:07:40.046 sys 0m0.087s 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:40.046 ************************************ 00:07:40.046 END TEST iscsi_tgt_filesystem_ext4 00:07:40.046 ************************************ 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@145 -- # run_test iscsi_tgt_filesystem_btrfs filesystem_test btrfs 
00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.046 ************************************ 00:07:40.046 START TEST iscsi_tgt_filesystem_btrfs 00:07:40.046 ************************************ 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1123 -- # filesystem_test btrfs 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@89 -- # fstype=btrfs 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@91 -- # make_filesystem btrfs /dev/sda1 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:40.046 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/sda1 00:07:40.046 btrfs-progs v6.6.2 00:07:40.046 See https://btrfs.readthedocs.io for more information. 
00:07:40.046 00:07:40.046 Performing full device TRIM /dev/sda1 (1.99GiB) ... 00:07:40.046 NOTE: several default settings have changed in version 5.15, please make sure 00:07:40.046 this does not affect your deployments: 00:07:40.046 - DUP for metadata (-m dup) 00:07:40.046 - enabled no-holes (-O no-holes) 00:07:40.046 - enabled free-space-tree (-R free-space-tree) 00:07:40.046 00:07:40.046 Label: (null) 00:07:40.046 UUID: cfdb848b-a4ea-4065-84f8-e7759d592154 00:07:40.046 Node size: 16384 00:07:40.046 Sector size: 4096 00:07:40.046 Filesystem size: 1.99GiB 00:07:40.046 Block group profiles: 00:07:40.046 Data: single 8.00MiB 00:07:40.046 Metadata: DUP 102.00MiB 00:07:40.046 System: DUP 8.00MiB 00:07:40.046 SSD detected: yes 00:07:40.046 Zoned device: no 00:07:40.046 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:40.046 Runtime features: free-space-tree 00:07:40.046 Checksum: crc32c 00:07:40.046 Number of devices: 1 00:07:40.046 Devices: 00:07:40.046 ID SIZE PATH 00:07:40.046 1 1.99GiB /dev/sda1 00:07:40.046 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@93 -- # '[' 0 -eq 1 ']' 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@119 -- # touch /mnt/device/aaa 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@120 -- # umount /mnt/device 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@122 -- # iscsiadm -m node --logout 00:07:40.303 Logging out of session [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:07:40.303 Logout of 
[sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@123 -- # waitforiscsidevices 0 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:07:40.303 iscsiadm: No active sessions. 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # true 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=0 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@124 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:07:40.303 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:07:40.303 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@125 -- # waitforiscsidevices 1 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:07:40.303 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:07:40.303 [2024-07-25 10:08:13.470926] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=1 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@127 -- # iscsiadm -m session -P 3 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@127 -- # grep 'Attached scsi disk' 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@127 -- # awk '{print $4}' 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@127 -- # dev=sda 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@129 -- # waitforfile /dev/sda1 00:07:40.304 10:08:13 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1265 -- # local i=0 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1276 -- # return 0 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@130 -- # mount -o rw /dev/sda1 /mnt/device 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@132 -- # '[' -f /mnt/device/aaa ']' 00:07:40.304 File existed. 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@133 -- # echo 'File existed.' 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@139 -- # rm -rf /mnt/device/aaa 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@140 -- # umount /mnt/device 00:07:40.304 00:07:40.304 real 0m0.378s 00:07:40.304 user 0m0.042s 00:07:40.304 sys 0m0.108s 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.304 ************************************ 00:07:40.304 END TEST iscsi_tgt_filesystem_btrfs 00:07:40.304 ************************************ 00:07:40.304 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@146 -- # run_test iscsi_tgt_filesystem_xfs filesystem_test 
xfs 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.561 ************************************ 00:07:40.561 START TEST iscsi_tgt_filesystem_xfs 00:07:40.561 ************************************ 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1123 -- # filesystem_test xfs 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@89 -- # fstype=xfs 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@91 -- # make_filesystem xfs /dev/sda1 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:40.561 10:08:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/sda1 00:07:40.561 meta-data=/dev/sda1 isize=512 agcount=4, agsize=130560 blks 00:07:40.561 = sectsz=4096 attr=2, projid32bit=1 00:07:40.561 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:40.561 = reflink=1 
bigtime=1 inobtcount=1 nrext64=0 00:07:40.561 data = bsize=4096 blocks=522240, imaxpct=25 00:07:40.561 = sunit=0 swidth=0 blks 00:07:40.561 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:40.561 log =internal log bsize=4096 blocks=16384, version=2 00:07:40.561 = sectsz=4096 sunit=1 blks, lazy-count=1 00:07:40.561 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:40.818 Discarding blocks...Done. 00:07:40.818 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:40.818 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:07:41.384 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@93 -- # '[' 0 -eq 1 ']' 00:07:41.384 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@119 -- # touch /mnt/device/aaa 00:07:41.384 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@120 -- # umount /mnt/device 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@122 -- # iscsiadm -m node --logout 00:07:41.641 Logging out of session [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:07:41.641 Logout of [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@123 -- # waitforiscsidevices 0 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:07:41.641 iscsiadm: No active sessions. 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # true 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=0 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@124 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:07:41.641 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:07:41.641 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@125 -- # waitforiscsidevices 1 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:07:41.641 [2024-07-25 10:08:14.800359] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=1 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@127 -- # iscsiadm -m session -P 3 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@127 -- # grep 'Attached scsi disk' 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@127 -- # awk '{print $4}' 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@127 -- # dev=sda 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@129 -- # waitforfile /dev/sda1 00:07:41.641 10:08:14 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1265 -- # local i=0 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1276 -- # return 0 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@130 -- # mount -o rw /dev/sda1 /mnt/device 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@132 -- # '[' -f /mnt/device/aaa ']' 00:07:41.641 File existed. 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@133 -- # echo 'File existed.' 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@139 -- # rm -rf /mnt/device/aaa 00:07:41.641 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@140 -- # umount /mnt/device 00:07:41.898 00:07:41.899 real 0m1.342s 00:07:41.899 user 0m0.041s 00:07:41.899 sys 0m0.102s 00:07:41.899 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.899 10:08:14 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:41.899 ************************************ 00:07:41.899 END TEST iscsi_tgt_filesystem_xfs 00:07:41.899 ************************************ 00:07:41.899 10:08:14 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:41.899 10:08:14 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@148 -- # rm -rf /mnt/device 00:07:41.899 10:08:14 iscsi_tgt.iscsi_tgt_filesystem 
-- filesystem/filesystem.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:07:41.899 10:08:14 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@152 -- # iscsicleanup 00:07:41.899 Cleaning up iSCSI connection 00:07:41.899 10:08:14 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:07:41.899 10:08:14 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:07:41.899 Logging out of session [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:07:41.899 Logout of [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@983 -- # rm -rf 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@153 -- # remove_backends 00:07:41.899 INFO: Removing lvol bdev 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@17 -- # echo 'INFO: Removing lvol bdev' 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@18 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.899 [2024-07-25 10:08:15.066266] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (f002920f-8254-4f46-aa03-bc730d407fc5) received event(SPDK_BDEV_EVENT_REMOVE) 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.899 INFO: Removing lvol stores 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@20 -- # echo 'INFO: Removing lvol stores' 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@21 -- # 
rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.899 INFO: Removing NVMe 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@23 -- # echo 'INFO: Removing NVMe' 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@24 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@26 -- # return 0 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@154 -- # killprocess 63734 00:07:41.899 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@948 -- # '[' -z 63734 ']' 00:07:42.157 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@952 -- # kill -0 63734 00:07:42.157 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@953 -- # uname 00:07:42.157 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:42.157 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63734 00:07:42.157 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:42.157 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:42.157 killing process with pid 63734 00:07:42.157 10:08:15 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63734' 00:07:42.157 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@967 -- # kill 63734 00:07:42.157 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@972 -- # wait 63734 00:07:42.437 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@155 -- # iscsitestfini 00:07:42.437 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:07:42.437 00:07:42.437 real 0m7.047s 00:07:42.437 user 0m25.676s 00:07:42.437 sys 0m1.366s 00:07:42.437 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.437 10:08:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.437 ************************************ 00:07:42.437 END TEST iscsi_tgt_filesystem 00:07:42.437 ************************************ 00:07:42.437 10:08:15 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:07:42.437 10:08:15 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@32 -- # run_test chap_during_discovery /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:07:42.437 10:08:15 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.437 10:08:15 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.437 10:08:15 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:07:42.437 ************************************ 00:07:42.437 START TEST chap_during_discovery 00:07:42.437 ************************************ 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:07:42.437 * Looking for test storage... 
00:07:42.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 
00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@13 -- # USER=chapo 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@14 -- # MUSER=mchapo 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@15 -- # PASS=123456789123 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@16 -- # MPASS=321978654321 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@19 -- # iscsitestinit 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@21 -- # set_up_iscsi_target 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 
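The CHAP credentials set above (`USER`/`MUSER`/`PASS`/`MPASS`) are later handed to chap_common.sh's `parse_cmd_line`, whose `getopts :t:u:s:r:m:dlb` walk appears further down in the trace. A standalone sketch of that option-parsing pattern; this is an illustrative re-implementation that mirrors the traced variable names, not the SPDK source itself:

```shell
#!/usr/bin/env bash
# Illustrative re-implementation of chap_common.sh's parse_cmd_line.
parse_cmd_line() {
	OPTIND=0
	DURING_DISCOVERY=0
	DURING_LOGIN=0
	BI_DIRECT=0
	CHAP_USER="chapo"          # defaults as shown in the trace
	CHAP_PASS="123456789123"
	CHAP_MUSER=""
	CHAP_MPASS=""
	AUTH_GROUP_ID=1
	local opt
	while getopts ":t:u:s:r:m:dlb" opt; do
		case ${opt} in
			t) AUTH_GROUP_ID=$OPTARG ;; # auth group tag
			u) CHAP_USER=$OPTARG ;;     # CHAP user
			s) CHAP_PASS=$OPTARG ;;     # CHAP secret
			r) CHAP_MUSER=$OPTARG ;;    # mutual CHAP user
			m) CHAP_MPASS=$OPTARG ;;    # mutual CHAP secret
			d) DURING_DISCOVERY=1 ;;    # require CHAP during discovery
			l) DURING_LOGIN=1 ;;        # require CHAP during login
			b) BI_DIRECT=1 ;;           # bidirectional (mutual) CHAP
		esac
	done
}

# Same invocation as the traced test:
parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b
echo "$CHAP_USER/$CHAP_MUSER discovery=$DURING_DISCOVERY bidir=$BI_DIRECT"
# prints: chapo/mchapo discovery=1 bidir=1
```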
00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@142 -- # pid=64173 00:07:42.437 iSCSI target launched. pid: 64173 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 64173' 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@145 -- # waitforlisten 64173 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@829 -- # '[' -z 64173 ']' 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.437 10:08:15 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.695 [2024-07-25 10:08:15.759535] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:07:42.695 [2024-07-25 10:08:15.759645] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64173 ] 00:07:42.953 [2024-07-25 10:08:16.022064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.953 [2024-07-25 10:08:16.095721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.519 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.519 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:43.519 10:08:16 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:07:43.519 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.519 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.519 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.519 10:08:16 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:07:43.519 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.519 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.519 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.519 iscsi_tgt is listening. Running tests... 00:07:43.519 10:08:16 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:07:43.519 10:08:16 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:07:43.519 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.519 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.778 Malloc0 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.778 10:08:16 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.778 10:08:16 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@155 -- # sleep 1 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:07:44.713 configuring target for bidirectional authentication 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@24 -- # echo 'configuring target for bidirectional authentication' 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 
00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@95 -- # '[' 0 -eq 1 ']' 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 1 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.713 10:08:17 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.713 executing discovery without adding credential to initiator - we expect failure 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@27 -- # rc=0 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:07:44.713 iscsiadm: Login failed to authenticate with target 00:07:44.713 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:07:44.713 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # rc=24 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@29 -- # '[' 24 -eq 0 ']' 00:07:44.713 configuring initiator for bidirectional authentication 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@35 -- # echo 'configuring initiator for bidirectional authentication' 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@36 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:07:44.713 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 
-- # BI_DIRECT=0 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:07:44.714 iscsiadm: No matching sessions found 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:07:44.714 iscsiadm: No records found 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # true 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password 
= password/' /etc/iscsi/iscsid.conf 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:07:44.714 10:08:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:07:47.996 10:08:20 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:07:47.996 10:08:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:07:48.937 10:08:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - 
ERR; print_backtrace >&2' ERR 00:07:48.937 10:08:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@116 -- # '[' 0 -eq 1 ']' 00:07:48.937 10:08:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:07:48.937 10:08:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:07:48.937 10:08:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:07:48.938 10:08:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:07:48.938 10:08:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:07:48.938 10:08:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:07:48.938 10:08:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:07:48.938 10:08:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:07:48.938 10:08:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:07:48.938 10:08:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@135 -- # restart_iscsid 00:07:48.938 10:08:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:07:52.220 10:08:25 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:07:52.220 10:08:25 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- 
# sleep 1 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:07:53.161 executing discovery with adding credential to initiator 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@37 -- # echo 'executing discovery with adding credential to initiator' 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@38 -- # rc=0 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@39 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:07:53.161 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@40 -- # '[' 0 -ne 0 ']' 00:07:53.161 DONE 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@44 -- # echo DONE 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@45 -- # default_initiator_chap_credentials 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:07:53.161 iscsiadm: No matching sessions found 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:07:53.161 10:08:26 
iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:07:53.161 10:08:26 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:07:56.444 10:08:29 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:07:56.444 10:08:29 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:07:57.379 10:08:30 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:57.379 10:08:30 
iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:07:57.379 10:08:30 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@49 -- # killprocess 64173 00:07:57.379 10:08:30 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@948 -- # '[' -z 64173 ']' 00:07:57.379 10:08:30 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@952 -- # kill -0 64173 00:07:57.379 10:08:30 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@953 -- # uname 00:07:57.379 10:08:30 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:57.379 10:08:30 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64173 00:07:57.379 killing process with pid 64173 00:07:57.379 10:08:30 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:57.379 10:08:30 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:57.379 10:08:30 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64173' 00:07:57.379 10:08:30 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@967 -- # kill 64173 00:07:57.379 10:08:30 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@972 -- # wait 64173 00:07:57.637 10:08:30 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@51 -- # iscsitestfini 00:07:57.637 10:08:30 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:07:57.637 00:07:57.637 real 0m15.092s 00:07:57.637 user 0m15.074s 00:07:57.637 sys 0m0.686s 00:07:57.637 10:08:30 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.637 10:08:30 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.637 ************************************ 00:07:57.637 END TEST chap_during_discovery 00:07:57.637 
************************************ 00:07:57.637 10:08:30 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:07:57.638 10:08:30 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@33 -- # run_test chap_mutual_auth /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:07:57.638 10:08:30 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:57.638 10:08:30 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.638 10:08:30 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:07:57.638 ************************************ 00:07:57.638 START TEST chap_mutual_auth 00:07:57.638 ************************************ 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:07:57.638 * Looking for test storage... 00:07:57.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 
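The repeated `getopts :t:u:s:r:m:dlb` / `case ${opt}` lines traced throughout this log come from `chap_common.sh`'s `parse_cmd_line`. A minimal re-creation of that option handling is sketched below; the variable names mirror the trace, but the function body is an illustration, not the SPDK source (the real script resets `OPTIND=0`, which bash also accepts; POSIX `1` is used here).

```shell
#!/bin/sh
# Sketch of the option parsing seen in the trace (optstring ":t:u:s:r:m:dlb").
# Variable names follow the log; the implementation is illustrative only.
parse_cmd_line() {
    OPTIND=1                                   # reset between invocations
    DURING_DISCOVERY=0 DURING_LOGIN=0 BI_DIRECT=0
    CHAP_USER="" CHAP_PASS="" CHAP_MUSER="" CHAP_MPASS="" AUTH_GROUP_ID=1
    while getopts ":t:u:s:r:m:dlb" opt; do
        case ${opt} in
            t) AUTH_GROUP_ID=$OPTARG ;;        # auth group tag
            u) CHAP_USER=$OPTARG ;;            # one-way CHAP user
            s) CHAP_PASS=$OPTARG ;;            # one-way CHAP secret
            r) CHAP_MUSER=$OPTARG ;;           # mutual CHAP user
            m) CHAP_MPASS=$OPTARG ;;           # mutual CHAP secret
            d) DURING_DISCOVERY=1 ;;           # apply during discovery
            l) DURING_LOGIN=1 ;;               # apply during login
            b) BI_DIRECT=1 ;;                  # bidirectional auth
        esac
    done
}

# Same argument string the log shows for config_chap_credentials_for_target:
parse_cmd_line -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b
echo "$AUTH_GROUP_ID $CHAP_USER $CHAP_MUSER $DURING_DISCOVERY $DURING_LOGIN $BI_DIRECT"
# → 2 chapo mchapo 1 1 1
```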
00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@13 -- # USER=chapo 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@14 -- # MUSER=mchapo 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@15 -- # 
PASS=123456789123 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@16 -- # MPASS=321978654321 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@19 -- # iscsitestinit 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@21 -- # set_up_iscsi_target 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@142 -- # pid=64444 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 64444' 00:07:57.638 iSCSI target launched. pid: 64444 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@145 -- # waitforlisten 64444 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@829 -- # '[' -z 64444 ']' 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
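The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from `waitforlisten`. The gist is a poll loop: bail out if the target pid dies, succeed once the RPC socket appears. The sketch below is an assumption about that shape, not the `autotest_common.sh` implementation; the function name and retry budget are invented for illustration.

```shell
#!/bin/sh
# Illustrative poll loop in the spirit of waitforlisten: returns 0 once the
# RPC socket exists, 1 if the process dies or retries run out. Fractional
# sleep assumes GNU coreutils.
wait_for_rpc_socket() {
    pid=$1 sock=$2 retries=${3:-100}
    while [ "$retries" -gt 0 ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -S "$sock" ] && return 0               # socket is listening
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1                                     # timed out
}
```

In the log this gate is why `rpc_cmd iscsi_set_options` only runs after the `waitforlisten 64444` step completes.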
00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.638 10:08:30 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:07:57.902 [2024-07-25 10:08:30.895859] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:07:57.902 [2024-07-25 10:08:30.895959] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64444 ] 00:07:58.167 [2024-07-25 10:08:31.159313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.167 [2024-07-25 10:08:31.236783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.732 10:08:31 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.732 10:08:31 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@862 -- # return 0 00:07:58.732 10:08:31 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:07:58.732 10:08:31 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.732 10:08:31 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:07:58.733 10:08:31 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.733 10:08:31 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:07:58.733 10:08:31 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.733 10:08:31 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:07:58.733 10:08:31 iscsi_tgt.chap_mutual_auth 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.733 iscsi_tgt is listening. Running tests... 00:07:58.733 10:08:31 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 00:07:58.733 10:08:31 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:07:58.733 10:08:31 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.733 10:08:31 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:07:58.991 Malloc0 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 
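The target-setup sequence traced above (chap_common.sh@151-154) can be replayed as a dry run. In the real test `rpc_cmd` forwards to `scripts/rpc.py` against the live `iscsi_tgt`; here it is stubbed to print each call so the sequence can be inspected without a running target. The arguments are taken verbatim from the trace.

```shell
#!/bin/sh
# Dry-run stub: print the RPC instead of invoking scripts/rpc.py.
rpc_cmd() { echo "rpc.py $*"; }

rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260       # portal group, tag 1
rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32  # initiator group, tag 2
rpc_cmd bdev_malloc_create 64 512                       # 64 MiB bdev, 512 B blocks
rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias \
    Malloc0:0 1:2 256 -d                                # map pg 1 to ig 2
```

The `1:2` pairing is what later lets the initiator at 10.0.0.2 log in through portal group 1, and `-d` disables header/data digest negotiation for the node.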
00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.991 10:08:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@155 -- # sleep 1 00:07:59.926 configuring target for authentication 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@24 -- # echo 'configuring target for authentication' 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts 
:t:u:s:r:m:dlb opt 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- 
# DURING_LOGIN=1 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 0 -eq 1 ']' 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@99 -- # rpc_cmd iscsi_target_node_set_auth -g 1 -r iqn.2016-06.io.spdk:disk1 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@103 -- # '[' 0 -eq 1 ']' 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@106 -- # rpc_cmd iscsi_set_discovery_auth -r -g 1 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:07:59.926 executing discovery without adding credential to initiator - we expect failure 00:07:59.926 configuring initiator with biderectional authentication 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@28 -- # echo 'configuring initiator with biderectional authentication' 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@29 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:07:59.926 10:08:33 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.926 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:07:59.927 10:08:33 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:07:59.927 iscsiadm: No matching sessions found 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # true 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:07:59.927 iscsiadm: No records found 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # true 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' 
/etc/iscsi/iscsid.conf 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:07:59.927 10:08:33 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:08:03.212 10:08:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:08:03.212 10:08:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@116 -- # '[' 1 -eq 1 ']' 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@117 -- # sed -i 's/#node.session.auth.authmethod = 
CHAP/node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@118 -- # sed -i 's/#node.session.auth.username =.*/node.session.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@119 -- # sed -i 's/#node.session.auth.password =.*/node.session.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' 1 -eq 1 ']' 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n 321978654321 ']' 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n mchapo ']' 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@121 -- # sed -i 's/#node.session.auth.username_in =.*/node.session.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@122 -- # sed -i 's/#node.session.auth.password_in =.*/node.session.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@135 -- # restart_iscsid 00:08:04.146 10:08:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:08:07.426 10:08:40 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:08:07.426 10:08:40 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:08:08.371 executing discovery - target should not be discovered since the -m option was not used 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@30 -- # echo 'executing discovery - target should not be discovered since the -m option was not used' 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@31 -- # rc=0 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:08:08.372 [2024-07-25 10:08:41.390998] iscsi.c: 982:iscsi_auth_params: *ERROR*: Initiator wants to use mutual CHAP for security, but it's not enabled. 
00:08:08.372 [2024-07-25 10:08:41.391039] iscsi.c:1957:iscsi_op_login_rsp_handle_csg_bit: *ERROR*: iscsi_auth_params() failed 00:08:08.372 iscsiadm: Login failed to authenticate with target 00:08:08.372 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:08:08.372 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:08:08.372 configuring target for authentication with the -m option 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # rc=24 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@33 -- # '[' 24 -eq 0 ']' 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@37 -- # echo 'configuring target for authentication with the -m option' 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@38 -- # config_chap_credentials_for_target -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- 
# AUTH_GROUP_ID=1 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=2 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@24 -- # case ${opt} in 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 2 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 2 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 1 -eq 1 ']' 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@97 -- # rpc_cmd iscsi_target_node_set_auth -g 2 -r -m iqn.2016-06.io.spdk:disk1 00:08:08.372 10:08:41 
iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 2 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.372 executing discovery: 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@39 -- # echo 'executing discovery:' 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@40 -- # rc=0 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@41 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:08:08.372 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:08:08.372 executing login: 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@42 -- # '[' 0 -ne 0 ']' 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@46 -- # echo 'executing login:' 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@47 -- # rc=0 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@48 -- # iscsiadm -m node -l -p 10.0.0.1:3260 00:08:08.372 Logging in to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:08:08.372 Login to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 
successful. 00:08:08.372 DONE 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@49 -- # '[' 0 -ne 0 ']' 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@54 -- # echo DONE 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@55 -- # default_initiator_chap_credentials 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:08:08.372 [2024-07-25 10:08:41.505363] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:08.372 Logging out of session [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:08:08.372 Logout of [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] successful. 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = 
CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:08:08.372 10:08:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:08:11.656 10:08:44 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:08:11.656 10:08:44 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:08:12.591 10:08:45 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:12.591 10:08:45 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@57 -- # trap - SIGINT SIGTERM EXIT 00:08:12.591 10:08:45 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@59 -- # killprocess 64444 00:08:12.591 10:08:45 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@948 -- # '[' -z 64444 ']' 00:08:12.591 10:08:45 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@952 -- # kill -0 64444 00:08:12.591 10:08:45 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@953 -- # uname 00:08:12.591 10:08:45 iscsi_tgt.chap_mutual_auth -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:12.591 10:08:45 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64444 00:08:12.591 10:08:45 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:12.591 10:08:45 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:12.591 killing process with pid 64444 00:08:12.591 10:08:45 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64444' 00:08:12.591 10:08:45 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@967 -- # kill 64444 00:08:12.591 10:08:45 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@972 -- # wait 64444 00:08:12.848 10:08:46 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@61 -- # iscsitestfini 00:08:12.848 10:08:46 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:08:12.848 00:08:12.848 real 0m15.321s 00:08:12.848 user 0m15.389s 00:08:12.848 sys 0m0.712s 00:08:12.848 10:08:46 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.848 10:08:46 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:12.848 ************************************ 00:08:12.848 END TEST chap_mutual_auth 00:08:12.848 ************************************ 00:08:12.848 10:08:46 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:08:12.848 10:08:46 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@34 -- # run_test iscsi_tgt_reset /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:08:12.848 10:08:46 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:12.848 10:08:46 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.848 10:08:46 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:08:12.848 ************************************ 00:08:12.848 START TEST iscsi_tgt_reset 00:08:12.848 
************************************ 00:08:12.848 10:08:46 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:08:13.107 * Looking for test storage... 00:08:13.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@11 -- # iscsitestinit 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@18 -- # hash sg_reset 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@22 -- # timing_enter start_iscsi_tgt 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@25 -- # pid=64741 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@26 -- # echo 'Process pid: 64741' 00:08:13.107 Process pid: 64741 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@28 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@30 -- # waitforlisten 64741 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- 
common/autotest_common.sh@829 -- # '[' -z 64741 ']' 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:13.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.107 10:08:46 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:13.107 [2024-07-25 10:08:46.281348] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:08:13.107 [2024-07-25 10:08:46.281495] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64741 ] 00:08:13.366 [2024-07-25 10:08:46.429908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.366 [2024-07-25 10:08:46.531933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@862 -- # return 0 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@31 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@32 -- # rpc_cmd framework_start_init 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.299 iscsi_tgt is listening. Running tests... 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@33 -- # echo 'iscsi_tgt is listening. Running tests...' 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@35 -- # timing_exit start_iscsi_tgt 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@37 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@38 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@39 -- # rpc_cmd bdev_malloc_create 64 512 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # 
set +x 00:08:14.299 Malloc0 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@44 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.299 10:08:47 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@45 -- # sleep 1 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@47 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:08:15.675 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@48 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:15.675 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:08:15.675 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
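The discovery step above prints one record per target in the form `portal,portal-group-tag target-iqn` (here `10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3`). A minimal sketch of pulling the record apart with plain parameter expansion — the record is hard-coded, since no live target is assumed here:

```shell
# Sample discovery record as printed by `iscsiadm -m discovery -t sendtargets`
# (hard-coded sample; a running SPDK target is not assumed).
record="10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3"

portal="${record%%,*}"                  # everything before the comma: 10.0.0.1:3260
tag="${record#*,}"; tag="${tag%% *}"    # between comma and space: portal group tag 1
iqn="${record##* }"                     # everything after the last space: the IQN

echo "$portal $tag $iqn"
```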
00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@49 -- # waitforiscsidevices 1 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@116 -- # local num=1 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:15.675 [2024-07-25 10:08:48.561204] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # n=1 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@123 -- # return 0 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # iscsiadm -m session -P 3 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # grep 'Attached scsi disk' 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # awk '{print $4}' 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # dev=sda 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@54 -- # fiopid=64809 00:08:15.675 FIO pid: 64809 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@55 -- # echo 'FIO pid: 64809' 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@57 -- # trap 'iscsicleanup; killprocess $pid; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 60 00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 
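The trace above shows how `waitforiscsidevices` and `reset.sh@51` scrape `iscsiadm -m session -P 3`: a `grep -c` on `Attached scsi disk sd[a-z]*` counts logged-in disks, and `awk '{print $4}'` pulls out the device node. A self-contained sketch with a sample fragment of that output hard-coded (no live session assumed):

```shell
# Sample fragment of `iscsiadm -m session -P 3` output (hard-coded sample).
session_info='        scsi4 Channel 00 Id 0 Lun: 0
                Attached scsi disk sda          State: running'

# waitforiscsidevices: count attached disks and compare with the expected number.
n=$(printf '%s\n' "$session_info" | grep -c 'Attached scsi disk sd[a-z]*')

# reset.sh@51: the device name is the 4th field of the matching line.
dev=$(printf '%s\n' "$session_info" | grep 'Attached scsi disk' | awk '{print $4}')

echo "$n $dev"   # -> 1 sda
```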
00:08:15.675 10:08:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:08:15.675 [global] 00:08:15.675 thread=1 00:08:15.675 invalidate=1 00:08:15.675 rw=read 00:08:15.675 time_based=1 00:08:15.675 runtime=60 00:08:15.675 ioengine=libaio 00:08:15.675 direct=1 00:08:15.675 bs=512 00:08:15.675 iodepth=1 00:08:15.675 norandommap=1 00:08:15.675 numjobs=1 00:08:15.675 00:08:15.675 [job0] 00:08:15.675 filename=/dev/sda 00:08:15.675 queue_depth set to 113 (sda) 00:08:15.675 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:08:15.675 fio-3.35 00:08:15.675 Starting 1 thread 00:08:16.610 10:08:49 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 64741 00:08:16.610 10:08:49 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 64809 00:08:16.610 10:08:49 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:08:16.610 [2024-07-25 10:08:49.583623] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:08:16.610 [2024-07-25 10:08:49.583681] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:08:16.610 10:08:49 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:08:16.610 [2024-07-25 10:08:49.586060] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:17.552 10:08:50 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 64741 00:08:17.552 10:08:50 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 64809 00:08:17.552 10:08:50 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:08:17.552 10:08:50 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:08:18.484 10:08:51 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 64741 00:08:18.484 10:08:51 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 64809 00:08:18.484 10:08:51 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:08:18.484 [2024-07-25 
10:08:51.596272] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:08:18.484 [2024-07-25 10:08:51.596337] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:08:18.484 [2024-07-25 10:08:51.597225] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:18.484 10:08:51 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:08:19.419 10:08:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 64741 00:08:19.419 10:08:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 64809 00:08:19.419 10:08:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:08:19.419 10:08:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:08:20.352 10:08:53 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 64741 00:08:20.610 10:08:53 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 64809 00:08:20.610 10:08:53 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:08:20.610 [2024-07-25 10:08:53.612018] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:08:20.610 [2024-07-25 10:08:53.612083] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:08:20.610 10:08:53 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:08:20.610 [2024-07-25 10:08:53.613698] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:21.544 10:08:54 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 64741 00:08:21.544 10:08:54 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 64809 00:08:21.544 10:08:54 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@70 -- # kill 64809 00:08:21.544 10:08:54 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # wait 64809 00:08:21.544 10:08:54 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # true 00:08:21.544 10:08:54 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@73 -- # trap - SIGINT SIGTERM EXIT 
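Between `sg_reset` calls, the loop above runs `kill -s 0 <pid>` on both the target pid (64741) and the fio pid (64809); signal 0 delivers nothing and only checks that the process still exists, so under the test's ERR trap a vanished process fails the run. A minimal liveness-check sketch using a stand-in background process (hypothetical `sleep`, not the actual target):

```shell
# Stand-in for a long-running process (hypothetical; the real test checks
# the iscsi_tgt and fio pids).
sleep 30 &
pid=$!

# `kill -s 0` sends no signal -- it only tests whether the pid exists.
if kill -s 0 "$pid" 2>/dev/null; then
    alive=yes
else
    alive=no
fi

kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || true
echo "$alive"   # -> yes
```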
00:08:21.544 10:08:54 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@75 -- # iscsicleanup 00:08:21.544 Cleaning up iSCSI connection 00:08:21.544 10:08:54 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:08:21.544 10:08:54 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:08:21.544 fio: io_u error on file /dev/sda: No such device: read offset=52643328, buflen=512 00:08:21.544 fio: pid=64835, err=19/file:io_u.c:1889, func=io_u error, error=No such device 00:08:21.544 00:08:21.544 job0: (groupid=0, jobs=1): err=19 (file:io_u.c:1889, func=io_u error, error=No such device): pid=64835: Thu Jul 25 10:08:54 2024 00:08:21.544 read: IOPS=17.9k, BW=8964KiB/s (9179kB/s)(50.2MiB/5735msec) 00:08:21.544 slat (usec): min=3, max=1512, avg= 5.62, stdev= 5.91 00:08:21.544 clat (nsec): min=1525, max=1843.0k, avg=49698.27, stdev=11874.36 00:08:21.544 lat (usec): min=47, max=1849, avg=55.30, stdev=12.77 00:08:21.544 clat percentiles (usec): 00:08:21.544 | 1.00th=[ 46], 5.00th=[ 46], 10.00th=[ 46], 20.00th=[ 47], 00:08:21.544 | 30.00th=[ 47], 40.00th=[ 47], 50.00th=[ 49], 60.00th=[ 49], 00:08:21.544 | 70.00th=[ 50], 80.00th=[ 52], 90.00th=[ 57], 95.00th=[ 59], 00:08:21.544 | 99.00th=[ 70], 99.50th=[ 83], 99.90th=[ 161], 99.95th=[ 196], 00:08:21.544 | 99.99th=[ 424] 00:08:21.544 bw ( KiB/s): min= 7855, max= 9390, per=100.00%, avg=8986.09, stdev=431.52, samples=11 00:08:21.544 iops : min=15710, max=18780, avg=17972.18, stdev=863.05, samples=11 00:08:21.544 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=74.21% 00:08:21.544 lat (usec) : 100=25.56%, 250=0.19%, 500=0.02%, 750=0.01%, 1000=0.01% 00:08:21.544 lat (msec) : 2=0.01% 00:08:21.544 cpu : usr=4.26%, sys=15.36%, ctx=102854, majf=0, minf=2 00:08:21.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:21.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:21.544 complete : 0=0.1%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:21.544 issued rwts: total=102820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:21.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:21.544 00:08:21.544 Run status group 0 (all jobs): 00:08:21.544 READ: bw=8964KiB/s (9179kB/s), 8964KiB/s-8964KiB/s (9179kB/s-9179kB/s), io=50.2MiB (52.6MB), run=5735-5735msec 00:08:21.544 00:08:21.544 Disk stats (read/write): 00:08:21.544 sda: ios=101273/0, merge=0/0, ticks=4869/0, in_queue=4869, util=98.39% 00:08:21.544 Logging out of session [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:08:21.544 Logout of [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:08:21.545 10:08:54 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:08:21.545 10:08:54 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@983 -- # rm -rf 00:08:21.545 10:08:54 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@76 -- # killprocess 64741 00:08:21.545 10:08:54 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@948 -- # '[' -z 64741 ']' 00:08:21.545 10:08:54 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@952 -- # kill -0 64741 00:08:21.545 10:08:54 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@953 -- # uname 00:08:21.545 10:08:54 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:21.545 10:08:54 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64741 00:08:21.545 10:08:54 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:21.545 10:08:54 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:21.545 killing process with pid 64741 00:08:21.545 10:08:54 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64741' 00:08:21.545 10:08:54 iscsi_tgt.iscsi_tgt_reset -- 
common/autotest_common.sh@967 -- # kill 64741 00:08:21.545 10:08:54 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@972 -- # wait 64741 00:08:22.109 10:08:55 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@77 -- # iscsitestfini 00:08:22.109 10:08:55 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:08:22.109 00:08:22.109 real 0m8.968s 00:08:22.109 user 0m6.671s 00:08:22.109 sys 0m2.076s 00:08:22.109 10:08:55 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.109 10:08:55 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:22.109 ************************************ 00:08:22.110 END TEST iscsi_tgt_reset 00:08:22.110 ************************************ 00:08:22.110 10:08:55 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:08:22.110 10:08:55 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@35 -- # run_test iscsi_tgt_rpc_config /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:08:22.110 10:08:55 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:22.110 10:08:55 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.110 10:08:55 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:08:22.110 ************************************ 00:08:22.110 START TEST iscsi_tgt_rpc_config 00:08:22.110 ************************************ 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:08:22.110 * Looking for test storage... 
00:08:22.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:22.110 
10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@11 -- # iscsitestinit 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@16 -- # rpc_config_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@18 -- # timing_enter start_iscsi_tgt 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@21 -- # pid=64979 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@20 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:08:22.110 Process pid: 64979 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@22 -- # echo 'Process pid: 64979' 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@24 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@26 -- # waitforlisten 64979 00:08:22.110 10:08:55 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@829 -- # '[' -z 64979 ']' 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.110 10:08:55 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:08:22.110 [2024-07-25 10:08:55.312045] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:08:22.110 [2024-07-25 10:08:55.312150] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64979 ] 00:08:22.367 [2024-07-25 10:08:55.455858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.367 [2024-07-25 10:08:55.551277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.300 10:08:56 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.300 10:08:56 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@862 -- # return 0 00:08:23.300 10:08:56 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:08:23.300 10:08:56 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@28 -- # rpc_wait_pid=64995 00:08:23.300 10:08:56 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@29 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:08:23.300 10:08:56 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@32 -- # ps 64995 00:08:23.300 PID TTY STAT TIME COMMAND 00:08:23.300 64995 ? S 0:00 python3 /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:08:23.300 10:08:56 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:08:23.864 10:08:56 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@35 -- # sleep 1 00:08:24.798 iscsi_tgt is listening. Running tests... 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@36 -- # echo 'iscsi_tgt is listening. Running tests...' 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@39 -- # NOT ps 64995 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@648 -- # local es=0 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # valid_exec_arg ps 64995 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@636 -- # local arg=ps 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # type -t ps 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -P ps 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # arg=/usr/bin/ps 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/ps ]] 00:08:24.798 10:08:57 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # ps 64995 00:08:24.798 PID TTY STAT TIME COMMAND 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # es=1 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@43 -- # rpc_wait_pid=65020 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:08:24.798 10:08:57 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@44 -- # sleep 1 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@45 -- # NOT ps 65020 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@648 -- # local es=0 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # valid_exec_arg ps 65020 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@636 -- # local arg=ps 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # type -t ps 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -P ps 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # arg=/usr/bin/ps 00:08:25.735 10:08:58 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/ps ]] 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # ps 65020 00:08:25.735 PID TTY STAT TIME COMMAND 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # es=1 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@47 -- # timing_exit start_iscsi_tgt 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.735 10:08:58 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.993 10:08:59 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@49 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py /home/vagrant/spdk_repo/spdk/scripts/rpc.py 10.0.0.1 10.0.0.2 3260 10.0.0.2/32 spdk_iscsi_ns 00:08:47.918 [2024-07-25 10:09:18.975201] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:48.484 [2024-07-25 10:09:21.464875] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:49.857 verify_log_flag_rpc_methods passed 00:08:49.857 create_malloc_bdevs_rpc_methods passed 00:08:49.857 verify_portal_groups_rpc_methods passed 00:08:49.857 verify_initiator_groups_rpc_method passed. 00:08:49.857 This issue will be fixed later. 00:08:49.857 verify_target_nodes_rpc_methods passed. 
00:08:49.857 verify_scsi_devices_rpc_methods passed 00:08:49.857 verify_iscsi_connection_rpc_methods passed 00:08:49.857 10:09:22 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:08:49.857 [ 00:08:49.857 { 00:08:49.857 "name": "Malloc0", 00:08:49.857 "aliases": [ 00:08:49.857 "d5c8fe0f-8666-4f83-bf7d-8f7c71fb30c7" 00:08:49.857 ], 00:08:49.857 "product_name": "Malloc disk", 00:08:49.857 "block_size": 512, 00:08:49.857 "num_blocks": 131072, 00:08:49.857 "uuid": "d5c8fe0f-8666-4f83-bf7d-8f7c71fb30c7", 00:08:49.857 "assigned_rate_limits": { 00:08:49.857 "rw_ios_per_sec": 0, 00:08:49.857 "rw_mbytes_per_sec": 0, 00:08:49.857 "r_mbytes_per_sec": 0, 00:08:49.857 "w_mbytes_per_sec": 0 00:08:49.857 }, 00:08:49.857 "claimed": false, 00:08:49.857 "zoned": false, 00:08:49.857 "supported_io_types": { 00:08:49.857 "read": true, 00:08:49.857 "write": true, 00:08:49.857 "unmap": true, 00:08:49.857 "flush": true, 00:08:49.857 "reset": true, 00:08:49.857 "nvme_admin": false, 00:08:49.857 "nvme_io": false, 00:08:49.857 "nvme_io_md": false, 00:08:49.857 "write_zeroes": true, 00:08:49.857 "zcopy": true, 00:08:49.857 "get_zone_info": false, 00:08:49.857 "zone_management": false, 00:08:49.857 "zone_append": false, 00:08:49.857 "compare": false, 00:08:49.857 "compare_and_write": false, 00:08:49.857 "abort": true, 00:08:49.857 "seek_hole": false, 00:08:49.857 "seek_data": false, 00:08:49.857 "copy": true, 00:08:49.857 "nvme_iov_md": false 00:08:49.857 }, 00:08:49.857 "memory_domains": [ 00:08:49.857 { 00:08:49.857 "dma_device_id": "system", 00:08:49.857 "dma_device_type": 1 00:08:49.857 }, 00:08:49.857 { 00:08:49.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.857 "dma_device_type": 2 00:08:49.857 } 00:08:49.857 ], 00:08:49.857 "driver_specific": {} 00:08:49.857 }, 00:08:49.857 { 00:08:49.857 "name": "Malloc1", 00:08:49.857 "aliases": [ 00:08:49.857 "012f8675-102a-4618-a474-8fa702eab8d7" 00:08:49.857 ], 
00:08:49.857 "product_name": "Malloc disk", 00:08:49.857 "block_size": 512, 00:08:49.857 "num_blocks": 131072, 00:08:49.857 "uuid": "012f8675-102a-4618-a474-8fa702eab8d7", 00:08:49.857 "assigned_rate_limits": { 00:08:49.857 "rw_ios_per_sec": 0, 00:08:49.857 "rw_mbytes_per_sec": 0, 00:08:49.857 "r_mbytes_per_sec": 0, 00:08:49.857 "w_mbytes_per_sec": 0 00:08:49.857 }, 00:08:49.857 "claimed": false, 00:08:49.857 "zoned": false, 00:08:49.857 "supported_io_types": { 00:08:49.857 "read": true, 00:08:49.857 "write": true, 00:08:49.857 "unmap": true, 00:08:49.857 "flush": true, 00:08:49.857 "reset": true, 00:08:49.857 "nvme_admin": false, 00:08:49.857 "nvme_io": false, 00:08:49.857 "nvme_io_md": false, 00:08:49.857 "write_zeroes": true, 00:08:49.857 "zcopy": true, 00:08:49.857 "get_zone_info": false, 00:08:49.857 "zone_management": false, 00:08:49.857 "zone_append": false, 00:08:49.857 "compare": false, 00:08:49.857 "compare_and_write": false, 00:08:49.857 "abort": true, 00:08:49.857 "seek_hole": false, 00:08:49.857 "seek_data": false, 00:08:49.857 "copy": true, 00:08:49.857 "nvme_iov_md": false 00:08:49.857 }, 00:08:49.857 "memory_domains": [ 00:08:49.857 { 00:08:49.857 "dma_device_id": "system", 00:08:49.857 "dma_device_type": 1 00:08:49.857 }, 00:08:49.857 { 00:08:49.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.857 "dma_device_type": 2 00:08:49.857 } 00:08:49.857 ], 00:08:49.857 "driver_specific": {} 00:08:49.857 }, 00:08:49.857 { 00:08:49.857 "name": "Malloc2", 00:08:49.857 "aliases": [ 00:08:49.858 "604d84fb-a7c4-4006-a939-cbfcf0f9af10" 00:08:49.858 ], 00:08:49.858 "product_name": "Malloc disk", 00:08:49.858 "block_size": 512, 00:08:49.858 "num_blocks": 131072, 00:08:49.858 "uuid": "604d84fb-a7c4-4006-a939-cbfcf0f9af10", 00:08:49.858 "assigned_rate_limits": { 00:08:49.858 "rw_ios_per_sec": 0, 00:08:49.858 "rw_mbytes_per_sec": 0, 00:08:49.858 "r_mbytes_per_sec": 0, 00:08:49.858 "w_mbytes_per_sec": 0 00:08:49.858 }, 00:08:49.858 "claimed": false, 00:08:49.858 
"zoned": false, 00:08:49.858 "supported_io_types": { 00:08:49.858 "read": true, 00:08:49.858 "write": true, 00:08:49.858 "unmap": true, 00:08:49.858 "flush": true, 00:08:49.858 "reset": true, 00:08:49.858 "nvme_admin": false, 00:08:49.858 "nvme_io": false, 00:08:49.858 "nvme_io_md": false, 00:08:49.858 "write_zeroes": true, 00:08:49.858 "zcopy": true, 00:08:49.858 "get_zone_info": false, 00:08:49.858 "zone_management": false, 00:08:49.858 "zone_append": false, 00:08:49.858 "compare": false, 00:08:49.858 "compare_and_write": false, 00:08:49.858 "abort": true, 00:08:49.858 "seek_hole": false, 00:08:49.858 "seek_data": false, 00:08:49.858 "copy": true, 00:08:49.858 "nvme_iov_md": false 00:08:49.858 }, 00:08:49.858 "memory_domains": [ 00:08:49.858 { 00:08:49.858 "dma_device_id": "system", 00:08:49.858 "dma_device_type": 1 00:08:49.858 }, 00:08:49.858 { 00:08:49.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.858 "dma_device_type": 2 00:08:49.858 } 00:08:49.858 ], 00:08:49.858 "driver_specific": {} 00:08:49.858 }, 00:08:49.858 { 00:08:49.858 "name": "Malloc3", 00:08:49.858 "aliases": [ 00:08:49.858 "0587527c-29fb-42a3-b3b4-bbd932074a88" 00:08:49.858 ], 00:08:49.858 "product_name": "Malloc disk", 00:08:49.858 "block_size": 512, 00:08:49.858 "num_blocks": 131072, 00:08:49.858 "uuid": "0587527c-29fb-42a3-b3b4-bbd932074a88", 00:08:49.858 "assigned_rate_limits": { 00:08:49.858 "rw_ios_per_sec": 0, 00:08:49.858 "rw_mbytes_per_sec": 0, 00:08:49.858 "r_mbytes_per_sec": 0, 00:08:49.858 "w_mbytes_per_sec": 0 00:08:49.858 }, 00:08:49.858 "claimed": false, 00:08:49.858 "zoned": false, 00:08:49.858 "supported_io_types": { 00:08:49.858 "read": true, 00:08:49.858 "write": true, 00:08:49.858 "unmap": true, 00:08:49.858 "flush": true, 00:08:49.858 "reset": true, 00:08:49.858 "nvme_admin": false, 00:08:49.858 "nvme_io": false, 00:08:49.858 "nvme_io_md": false, 00:08:49.858 "write_zeroes": true, 00:08:49.858 "zcopy": true, 00:08:49.858 "get_zone_info": false, 00:08:49.858 
"zone_management": false, 00:08:49.858 "zone_append": false, 00:08:49.858 "compare": false, 00:08:49.858 "compare_and_write": false, 00:08:49.858 "abort": true, 00:08:49.858 "seek_hole": false, 00:08:49.858 "seek_data": false, 00:08:49.858 "copy": true, 00:08:49.858 "nvme_iov_md": false 00:08:49.858 }, 00:08:49.858 "memory_domains": [ 00:08:49.858 { 00:08:49.858 "dma_device_id": "system", 00:08:49.858 "dma_device_type": 1 00:08:49.858 }, 00:08:49.858 { 00:08:49.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.858 "dma_device_type": 2 00:08:49.858 } 00:08:49.858 ], 00:08:49.858 "driver_specific": {} 00:08:49.858 }, 00:08:49.858 { 00:08:49.858 "name": "Malloc4", 00:08:49.858 "aliases": [ 00:08:49.858 "e96b6966-ec37-476c-9a70-58d868f30ac7" 00:08:49.858 ], 00:08:49.858 "product_name": "Malloc disk", 00:08:49.858 "block_size": 512, 00:08:49.858 "num_blocks": 131072, 00:08:49.858 "uuid": "e96b6966-ec37-476c-9a70-58d868f30ac7", 00:08:49.858 "assigned_rate_limits": { 00:08:49.858 "rw_ios_per_sec": 0, 00:08:49.858 "rw_mbytes_per_sec": 0, 00:08:49.858 "r_mbytes_per_sec": 0, 00:08:49.858 "w_mbytes_per_sec": 0 00:08:49.858 }, 00:08:49.858 "claimed": false, 00:08:49.858 "zoned": false, 00:08:49.858 "supported_io_types": { 00:08:49.858 "read": true, 00:08:49.858 "write": true, 00:08:49.858 "unmap": true, 00:08:49.858 "flush": true, 00:08:49.858 "reset": true, 00:08:49.858 "nvme_admin": false, 00:08:49.858 "nvme_io": false, 00:08:49.858 "nvme_io_md": false, 00:08:49.858 "write_zeroes": true, 00:08:49.858 "zcopy": true, 00:08:49.858 "get_zone_info": false, 00:08:49.858 "zone_management": false, 00:08:49.858 "zone_append": false, 00:08:49.858 "compare": false, 00:08:49.858 "compare_and_write": false, 00:08:49.858 "abort": true, 00:08:49.858 "seek_hole": false, 00:08:49.858 "seek_data": false, 00:08:49.858 "copy": true, 00:08:49.858 "nvme_iov_md": false 00:08:49.858 }, 00:08:49.858 "memory_domains": [ 00:08:49.858 { 00:08:49.858 "dma_device_id": "system", 00:08:49.858 
"dma_device_type": 1 00:08:49.858 }, 00:08:49.858 { 00:08:49.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.858 "dma_device_type": 2 00:08:49.858 } 00:08:49.858 ], 00:08:49.858 "driver_specific": {} 00:08:49.858 }, 00:08:49.858 { 00:08:49.858 "name": "Malloc5", 00:08:49.858 "aliases": [ 00:08:49.858 "2c6dcea2-d276-4fa4-a881-e3c8a68b00d6" 00:08:49.858 ], 00:08:49.858 "product_name": "Malloc disk", 00:08:49.858 "block_size": 512, 00:08:49.858 "num_blocks": 131072, 00:08:49.858 "uuid": "2c6dcea2-d276-4fa4-a881-e3c8a68b00d6", 00:08:49.858 "assigned_rate_limits": { 00:08:49.858 "rw_ios_per_sec": 0, 00:08:49.858 "rw_mbytes_per_sec": 0, 00:08:49.858 "r_mbytes_per_sec": 0, 00:08:49.858 "w_mbytes_per_sec": 0 00:08:49.858 }, 00:08:49.858 "claimed": false, 00:08:49.858 "zoned": false, 00:08:49.858 "supported_io_types": { 00:08:49.858 "read": true, 00:08:49.858 "write": true, 00:08:49.858 "unmap": true, 00:08:49.858 "flush": true, 00:08:49.858 "reset": true, 00:08:49.858 "nvme_admin": false, 00:08:49.858 "nvme_io": false, 00:08:49.858 "nvme_io_md": false, 00:08:49.858 "write_zeroes": true, 00:08:49.858 "zcopy": true, 00:08:49.858 "get_zone_info": false, 00:08:49.858 "zone_management": false, 00:08:49.858 "zone_append": false, 00:08:49.858 "compare": false, 00:08:49.858 "compare_and_write": false, 00:08:49.858 "abort": true, 00:08:49.858 "seek_hole": false, 00:08:49.858 "seek_data": false, 00:08:49.858 "copy": true, 00:08:49.858 "nvme_iov_md": false 00:08:49.858 }, 00:08:49.858 "memory_domains": [ 00:08:49.858 { 00:08:49.858 "dma_device_id": "system", 00:08:49.858 "dma_device_type": 1 00:08:49.858 }, 00:08:49.858 { 00:08:49.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.858 "dma_device_type": 2 00:08:49.858 } 00:08:49.858 ], 00:08:49.858 "driver_specific": {} 00:08:49.858 } 00:08:49.858 ] 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@53 -- # trap - SIGINT SIGTERM EXIT 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- 
rpc_config/rpc_config.sh@55 -- # iscsicleanup 00:08:49.858 Cleaning up iSCSI connection 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:08:49.858 iscsiadm: No matching sessions found 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@981 -- # true 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:08:49.858 iscsiadm: No records found 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # true 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@983 -- # rm -rf 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@56 -- # killprocess 64979 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@948 -- # '[' -z 64979 ']' 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@952 -- # kill -0 64979 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@953 -- # uname 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64979 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:49.858 killing process with pid 64979 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64979' 00:08:49.858 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@967 -- # kill 64979 00:08:49.858 
10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@972 -- # wait 64979 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@58 -- # iscsitestfini 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:08:50.426 00:08:50.426 real 0m28.374s 00:08:50.426 user 0m48.172s 00:08:50.426 sys 0m4.323s 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.426 ************************************ 00:08:50.426 END TEST iscsi_tgt_rpc_config 00:08:50.426 ************************************ 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:08:50.426 10:09:23 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:08:50.426 10:09:23 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@36 -- # run_test iscsi_tgt_iscsi_lvol /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:08:50.426 10:09:23 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:50.426 10:09:23 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.426 10:09:23 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:08:50.426 ************************************ 00:08:50.426 START TEST iscsi_tgt_iscsi_lvol 00:08:50.426 ************************************ 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:08:50.426 * Looking for test storage... 
00:08:50.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:50.426 10:09:23 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@11 -- # iscsitestinit 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@13 -- # MALLOC_BDEV_SIZE=128 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@15 -- # '[' 0 -eq 1 ']' 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@19 -- # NUM_LVS=2 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@20 -- # NUM_LVOL=2 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@23 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@24 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@26 -- # timing_enter start_iscsi_tgt 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@29 -- # pid=65526 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@28 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:08:50.426 Process pid: 65526 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@30 -- # echo 
'Process pid: 65526' 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@32 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@34 -- # waitforlisten 65526 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@829 -- # '[' -z 65526 ']' 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.426 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.427 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.427 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.427 10:09:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:50.685 [2024-07-25 10:09:23.752810] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
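The waitforlisten step above gives the target up to 100 retries (local max_retries=100) while it waits for the process to create its RPC socket at /var/tmp/spdk.sock. A minimal bash sketch of that polling pattern, assuming an illustrative helper name (`wait_for_rpc_sock` is not the SPDK function, and it checks for any filesystem entry rather than strictly a UNIX socket so the sketch is easy to exercise):

```shell
# Poll until a path (e.g. the SPDK RPC socket) appears, in the spirit of
# the waitforlisten step above. Illustrative sketch, not SPDK's helper.
wait_for_rpc_sock() {
    local sock=$1
    local retries=${2:-100}
    local i=0
    while [ "$i" -lt "$retries" ]; do
        if [ -e "$sock" ]; then
            return 0            # socket (or placeholder file) is present
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1                    # gave up: the process never started listening
}
```

The real helper does more (it also tracks the target's pid), but the retry-until-socket-exists loop is the core of what the log shows here.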
00:08:50.685 [2024-07-25 10:09:23.752936] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65526 ] 00:08:50.685 [2024-07-25 10:09:23.897587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.943 [2024-07-25 10:09:23.994368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.943 [2024-07-25 10:09:23.994512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.943 [2024-07-25 10:09:23.994695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.943 [2024-07-25 10:09:23.994696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.509 10:09:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.509 10:09:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:51.509 10:09:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:08:51.768 10:09:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:08:52.026 iscsi_tgt is listening. Running tests... 00:08:52.026 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@37 -- # echo 'iscsi_tgt is listening. Running tests...' 
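The target is launched with `-m 0xF` and the log then shows one reactor starting on each of cores 0-3. A small, hypothetical helper (not an SPDK utility) showing how such a hex core mask expands into that core list:

```shell
# Expand a hex core mask like the "-m 0xF" above into a space-separated
# core list. Illustrative only; SPDK does this internally via DPDK EAL.
mask_to_cores() {
    local mask=$(( $1 ))        # bash accepts 0x-prefixed hex here
    local core=0
    local out=""
    while [ "$mask" -ne 0 ]; do
        if [ $(( mask & 1 )) -ne 0 ]; then
            out="$out $core"    # this bit is set: the core is selected
        fi
        mask=$(( mask >> 1 ))
        core=$((core + 1))
    done
    printf '%s' "${out# }"
}
```

For the mask in this run, `mask_to_cores 0xF` yields `0 1 2 3`, matching the four "Reactor started on core N" notices above.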
00:08:52.026 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@39 -- # timing_exit start_iscsi_tgt 00:08:52.026 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:52.026 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:52.026 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@41 -- # timing_enter setup 00:08:52.026 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:52.026 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:52.026 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:08:52.284 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # seq 1 2 00:08:52.284 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:08:52.284 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=3 00:08:52.284 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 3 ANY 10.0.0.2/32 00:08:52.543 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 1 -eq 1 ']' 00:08:52.543 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:08:52.801 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # malloc_bdevs='Malloc0 ' 00:08:52.801 10:09:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:08:53.082 10:09:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # malloc_bdevs+=Malloc1 00:08:53.082 10:09:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:53.082 10:09:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@53 -- # bdev=raid0 00:08:53.082 10:09:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs_1 -c 1048576 00:08:53.666 10:09:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=32b49f68-4472-4de1-85b6-335bc336b5dc 00:08:53.666 10:09:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:08:53.666 10:09:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 2 00:08:53.666 10:09:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:08:53.666 10:09:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 32b49f68-4472-4de1-85b6-335bc336b5dc lbd_1 10 00:08:53.666 10:09:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=385f4b1d-54bf-43e6-a621-92c56cdf5251 00:08:53.666 10:09:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='385f4b1d-54bf-43e6-a621-92c56cdf5251:0 ' 00:08:53.666 10:09:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:08:53.666 10:09:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 32b49f68-4472-4de1-85b6-335bc336b5dc lbd_2 10 00:08:53.923 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=01446bf9-1466-493e-bf1a-1cfb8dfa05c6 00:08:53.923 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='01446bf9-1466-493e-bf1a-1cfb8dfa05c6:1 ' 00:08:53.924 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias 
'385f4b1d-54bf-43e6-a621-92c56cdf5251:0 01446bf9-1466-493e-bf1a-1cfb8dfa05c6:1 ' 1:3 256 -d 00:08:54.193 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:08:54.193 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=4 00:08:54.193 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 4 ANY 10.0.0.2/32 00:08:54.452 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 2 -eq 1 ']' 00:08:54.452 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:08:54.709 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc2 00:08:54.709 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc2 lvs_2 -c 1048576 00:08:54.967 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=b5e6c40c-e92a-43a6-83be-07db75e7a9b0 00:08:54.967 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:08:54.967 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 2 00:08:54.967 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:08:54.967 10:09:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b5e6c40c-e92a-43a6-83be-07db75e7a9b0 lbd_1 10 00:08:54.967 10:09:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a3ed6388-2720-4c97-9375-1c25d159bed1 00:08:54.967 10:09:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a3ed6388-2720-4c97-9375-1c25d159bed1:0 ' 00:08:54.967 10:09:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:08:54.967 
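Each lvol bdev created above is appended to the `LUNs` string as `"<bdev-name>:<lun-id> "` before `iscsi_create_target_node` receives the whole mapping (e.g. `'385f4b1d-...:0 01446bf9-...:1 '`). A bash sketch of that string construction, with `build_lun_map` as an illustrative name:

```shell
# Build the "<bdev>:<lun> " mapping string that iscsi_create_target_node
# consumes, mirroring the LUNs+= lines traced in iscsi_lvol.sh above.
build_lun_map() {
    local luns=""
    local i=0
    for bdev in "$@"; do
        luns="$luns$bdev:$i "   # LUN ids are assigned in order, from 0
        i=$((i + 1))
    done
    printf '%s' "$luns"
}
```

So `build_lun_map lbd_1 lbd_2` produces `lbd_1:0 lbd_2:1 ` (with the trailing space the script also keeps).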
10:09:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b5e6c40c-e92a-43a6-83be-07db75e7a9b0 lbd_2 10 00:08:55.225 10:09:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7bd01791-f62b-4c70-9d7d-10aaa759a723 00:08:55.225 10:09:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7bd01791-f62b-4c70-9d7d-10aaa759a723:1 ' 00:08:55.225 10:09:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias 'a3ed6388-2720-4c97-9375-1c25d159bed1:0 7bd01791-f62b-4c70-9d7d-10aaa759a723:1 ' 1:4 256 -d 00:08:55.482 10:09:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@66 -- # timing_exit setup 00:08:55.482 10:09:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:55.482 10:09:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:55.482 10:09:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@68 -- # sleep 1 00:08:56.414 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@70 -- # timing_enter discovery 00:08:56.414 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:56.414 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:56.414 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@71 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:08:56.414 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:08:56.414 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:08:56.414 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@72 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:56.414 [2024-07-25 10:09:29.667691] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:56.414 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 
10.0.0.1,3260] 00:08:56.414 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:08:56.414 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:56.414 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@73 -- # waitforiscsidevices 4 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@116 -- # local num=4 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:56.672 [2024-07-25 10:09:29.686423] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:56.672 [2024-07-25 10:09:29.686444] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:56.672 [2024-07-25 10:09:29.696229] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # n=3 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@120 -- # '[' 3 -ne 4 ']' 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@121 -- # sleep 0.1 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i++ )) 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # n=4 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@120 -- # '[' 4 -ne 4 ']' 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@123 -- # return 0 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@74 -- # timing_exit discovery 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@76 -- # timing_enter fio 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:56.672 10:09:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 8 -t randwrite -r 10 -v 00:08:56.672 [global] 00:08:56.672 thread=1 00:08:56.672 invalidate=1 00:08:56.672 rw=randwrite 00:08:56.672 time_based=1 00:08:56.672 runtime=10 00:08:56.672 ioengine=libaio 00:08:56.672 direct=1 00:08:56.672 bs=131072 00:08:56.672 iodepth=8 00:08:56.672 norandommap=0 00:08:56.672 numjobs=1 00:08:56.672 00:08:56.672 verify_dump=1 00:08:56.672 verify_backlog=512 00:08:56.672 verify_state_save=0 00:08:56.672 do_verify=1 00:08:56.672 verify=crc32c-intel 00:08:56.672 [job0] 00:08:56.672 filename=/dev/sdb 00:08:56.672 [job1] 00:08:56.672 filename=/dev/sdd 00:08:56.672 [job2] 00:08:56.672 filename=/dev/sda 00:08:56.672 [job3] 00:08:56.672 filename=/dev/sdc 00:08:56.930 queue_depth set to 113 (sdb) 00:08:56.930 queue_depth set to 113 (sdd) 00:08:56.930 queue_depth set to 113 (sda) 00:08:56.930 queue_depth set to 113 (sdc) 00:08:56.930 job0: (g=0): rw=randwrite, 
bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:08:56.930 job1: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:08:56.930 job2: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:08:56.930 job3: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:08:56.930 fio-3.35 00:08:56.930 Starting 4 threads 00:08:56.930 [2024-07-25 10:09:30.175345] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:56.930 [2024-07-25 10:09:30.179044] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:56.930 [2024-07-25 10:09:30.182782] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:56.930 [2024-07-25 10:09:30.186553] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:57.188 [2024-07-25 10:09:30.436249] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:57.444 [2024-07-25 10:09:30.455698] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:57.444 [2024-07-25 10:09:30.472160] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:57.444 [2024-07-25 10:09:30.503285] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:57.444 [2024-07-25 10:09:30.677266] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:57.701 [2024-07-25 10:09:30.707854] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:57.701 [2024-07-25 10:09:30.737427] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:57.701 [2024-07-25 10:09:30.885449] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:57.701 
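The waitforiscsidevices loop traced above reruns `iscsiadm -m session -P 3 | grep -c 'Attached scsi disk sd[a-z]*'` up to 20 times until all four disks are attached (it first sees n=3, sleeps 0.1, then sees n=4). A generic bash sketch of that retry pattern, assuming illustrative names (`wait_for_count`; `count_cmd` stands in for the iscsiadm pipeline):

```shell
# Retry a counting command until it reports the expected value, as
# waitforiscsidevices does for attached iSCSI disks in the log above.
wait_for_count() {
    local want=$1
    local count_cmd=$2
    local retries=${3:-20}
    local i=1
    local n
    while [ "$i" -le "$retries" ]; do
        n=$(eval "$count_cmd")
        if [ "$n" -eq "$want" ]; then
            return 0            # all expected devices are attached
        fi
        sleep 0.1               # same back-off the script uses
        i=$((i + 1))
    done
    return 1
}
```

The bounded retry matters here because the kernel attaches the sd* devices asynchronously after login, as the interleaved "Attached scsi disk" counts show.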
00:09:07.247 00:09:07.247 job0: (groupid=0, jobs=1): err= 0: pid=65789: Thu Jul 25 10:09:40 2024 00:09:07.247 read: IOPS=905, BW=113MiB/s
(119MB/s)(1120MiB/9888msec)
00:09:07.247 slat (usec): min=5, max=2144, avg=19.77, stdev=55.29
00:09:07.247 clat (usec): min=223, max=11093, avg=2974.17, stdev=1255.50
00:09:07.247 lat (usec): min=243, max=11104, avg=2993.94, stdev=1252.06
00:09:07.247 clat percentiles (usec):
00:09:07.247 | 1.00th=[ 840], 5.00th=[ 1221], 10.00th=[ 1532], 20.00th=[ 2089],
00:09:07.247 | 30.00th=[ 2343], 40.00th=[ 2540], 50.00th=[ 2737], 60.00th=[ 2999],
00:09:07.247 | 70.00th=[ 3359], 80.00th=[ 3851], 90.00th=[ 4555], 95.00th=[ 5145],
00:09:07.247 | 99.00th=[ 7242], 99.50th=[ 8291], 99.90th=[10552], 99.95th=[10683],
00:09:07.247 | 99.99th=[11076]
00:09:07.247 write: IOPS=1367, BW=171MiB/s (179MB/s)(1120MiB/6554msec); 0 zone resets
00:09:07.247 slat (usec): min=29, max=9090, avg=78.25, stdev=239.75
00:09:07.247 clat (usec): min=343, max=19921, avg=5705.40, stdev=1895.73
00:09:07.247 lat (usec): min=603, max=19966, avg=5783.65, stdev=1894.00
00:09:07.247 clat percentiles (usec):
00:09:07.247 | 1.00th=[ 1909], 5.00th=[ 2966], 10.00th=[ 3654], 20.00th=[ 4146],
00:09:07.247 | 30.00th=[ 4686], 40.00th=[ 5342], 50.00th=[ 5932], 60.00th=[ 6063],
00:09:07.247 | 70.00th=[ 6194], 80.00th=[ 6521], 90.00th=[ 7767], 95.00th=[ 9503],
00:09:07.247 | 99.00th=[11731], 99.50th=[13042], 99.90th=[15270], 99.95th=[16581],
00:09:07.247 | 99.99th=[19792]
00:09:07.247 bw ( KiB/s): min=98560, max=130816, per=17.28%, avg=114908.42, stdev=9552.07, samples=19
00:09:07.247 iops : min= 770, max= 1022, avg=897.74, stdev=74.43, samples=19
00:09:07.247 lat (usec) : 250=0.01%, 500=0.13%, 750=0.22%, 1000=0.79%
00:09:07.247 lat (msec) : 2=8.33%, 4=39.80%, 10=48.71%, 20=2.01%
00:09:07.247 cpu : usr=6.93%, sys=3.07%, ctx=14669, majf=0, minf=1
00:09:07.247 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:07.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:07.247 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:07.247 issued rwts: total=8957,8960,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:07.247 latency : target=0, window=0, percentile=100.00%, depth=8
00:09:07.247 job1: (groupid=0, jobs=1): err= 0: pid=65790: Thu Jul 25 10:09:40 2024
00:09:07.247 read: IOPS=892, BW=112MiB/s (117MB/s)(1100MiB/9864msec)
00:09:07.247 slat (usec): min=5, max=5911, avg=22.02, stdev=112.94
00:09:07.247 clat (usec): min=183, max=11159, avg=2981.75, stdev=1296.28
00:09:07.247 lat (usec): min=255, max=11192, avg=3003.77, stdev=1294.13
00:09:07.247 clat percentiles (usec):
00:09:07.247 | 1.00th=[ 766], 5.00th=[ 1123], 10.00th=[ 1385], 20.00th=[ 2057],
00:09:07.247 | 30.00th=[ 2343], 40.00th=[ 2573], 50.00th=[ 2769], 60.00th=[ 2999],
00:09:07.247 | 70.00th=[ 3392], 80.00th=[ 3949], 90.00th=[ 4621], 95.00th=[ 5276],
00:09:07.247 | 99.00th=[ 7242], 99.50th=[ 7963], 99.90th=[ 9241], 99.95th=[ 9634],
00:09:07.247 | 99.99th=[11207]
00:09:07.247 write: IOPS=1353, BW=169MiB/s (177MB/s)(1117MiB/6601msec); 0 zone resets
00:09:07.248 slat (usec): min=27, max=9445, avg=77.81, stdev=269.39
00:09:07.248 clat (usec): min=509, max=18349, avg=5743.87, stdev=1854.35
00:09:07.248 lat (usec): min=630, max=18412, avg=5821.68, stdev=1852.51
00:09:07.248 clat percentiles (usec):
00:09:07.248 | 1.00th=[ 2040], 5.00th=[ 3163], 10.00th=[ 3752], 20.00th=[ 4178],
00:09:07.248 | 30.00th=[ 4752], 40.00th=[ 5342], 50.00th=[ 5932], 60.00th=[ 6063],
00:09:07.248 | 70.00th=[ 6194], 80.00th=[ 6587], 90.00th=[ 7767], 95.00th=[ 9372],
00:09:07.248 | 99.00th=[11731], 99.50th=[13042], 99.90th=[14746], 99.95th=[16188],
00:09:07.248 | 99.99th=[18220]
00:09:07.248 bw ( KiB/s): min=100096, max=129277, per=17.16%, avg=114145.21, stdev=8407.01, samples=19
00:09:07.248 iops : min= 782, max= 1009, avg=891.63, stdev=65.57, samples=19
00:09:07.248 lat (usec) : 250=0.02%, 500=0.12%, 750=0.37%, 1000=0.84%
00:09:07.248 lat (msec) : 2=8.52%, 4=38.18%, 10=50.03%, 20=1.92%
00:09:07.248 cpu : usr=6.59%, sys=3.14%, ctx=14699, majf=0, minf=1
00:09:07.248 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:07.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:07.248 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:07.248 issued rwts: total=8800,8937,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:07.248 latency : target=0, window=0, percentile=100.00%, depth=8
00:09:07.248 job2: (groupid=0, jobs=1): err= 0: pid=65795: Thu Jul 25 10:09:40 2024
00:09:07.248 read: IOPS=828, BW=104MiB/s (109MB/s)(1020MiB/9846msec)
00:09:07.248 slat (usec): min=5, max=2710, avg=18.97, stdev=47.90
00:09:07.248 clat (usec): min=261, max=11297, avg=3379.49, stdev=1428.60
00:09:07.248 lat (usec): min=329, max=11312, avg=3398.46, stdev=1426.30
00:09:07.248 clat percentiles (usec):
00:09:07.248 | 1.00th=[ 955], 5.00th=[ 1254], 10.00th=[ 1500], 20.00th=[ 2245],
00:09:07.248 | 30.00th=[ 2638], 40.00th=[ 2933], 50.00th=[ 3261], 60.00th=[ 3621],
00:09:07.248 | 70.00th=[ 4015], 80.00th=[ 4359], 90.00th=[ 5080], 95.00th=[ 5800],
00:09:07.248 | 99.00th=[ 7963], 99.50th=[ 8717], 99.90th=[10028], 99.95th=[10552],
00:09:07.248 | 99.99th=[11338]
00:09:07.248 write: IOPS=1277, BW=160MiB/s (167MB/s)(1030MiB/6449msec); 0 zone resets
00:09:07.248 slat (usec): min=29, max=7518, avg=82.69, stdev=268.67
00:09:07.248 clat (usec): min=640, max=17611, avg=6067.00, stdev=1898.73
00:09:07.248 lat (usec): min=769, max=21313, avg=6149.69, stdev=1902.83
00:09:07.248 clat percentiles (usec):
00:09:07.248 | 1.00th=[ 2212], 5.00th=[ 3163], 10.00th=[ 3818], 20.00th=[ 4686],
00:09:07.248 | 30.00th=[ 5211], 40.00th=[ 5800], 50.00th=[ 5997], 60.00th=[ 6128],
00:09:07.248 | 70.00th=[ 6587], 80.00th=[ 7242], 90.00th=[ 8094], 95.00th=[ 9503],
00:09:07.248 | 99.00th=[12387], 99.50th=[13960], 99.90th=[14746], 99.95th=[15270],
00:09:07.248 | 99.99th=[17695]
00:09:07.248 bw ( KiB/s): min=81920, max=122880, per=15.88%, avg=105591.16, stdev=12848.41, samples=19
00:09:07.248 iops : min= 640, max= 960, avg=824.89, stdev=100.39, samples=19
00:09:07.248 lat (usec) : 500=0.05%, 750=0.21%, 1000=0.38%
00:09:07.248 lat (msec) : 2=7.80%, 4=32.24%, 10=57.21%, 20=2.10%
00:09:07.248 cpu : usr=6.75%, sys=2.76%, ctx=13603, majf=0, minf=1
00:09:07.248 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:07.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:07.248 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:07.248 issued rwts: total=8160,8238,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:07.248 latency : target=0, window=0, percentile=100.00%, depth=8
00:09:07.248 job3: (groupid=0, jobs=1): err= 0: pid=65796: Thu Jul 25 10:09:40 2024
00:09:07.248 read: IOPS=816, BW=102MiB/s (107MB/s)(1003MiB/9833msec)
00:09:07.248 slat (usec): min=5, max=2564, avg=21.66, stdev=69.04
00:09:07.248 clat (usec): min=256, max=13537, avg=3486.58, stdev=1469.37
00:09:07.248 lat (usec): min=290, max=13547, avg=3508.24, stdev=1465.17
00:09:07.248 clat percentiles (usec):
00:09:07.248 | 1.00th=[ 947], 5.00th=[ 1237], 10.00th=[ 1483], 20.00th=[ 2311],
00:09:07.248 | 30.00th=[ 2704], 40.00th=[ 3032], 50.00th=[ 3392], 60.00th=[ 3785],
00:09:07.248 | 70.00th=[ 4178], 80.00th=[ 4555], 90.00th=[ 5211], 95.00th=[ 5932],
00:09:07.248 | 99.00th=[ 7701], 99.50th=[ 8455], 99.90th=[11076], 99.95th=[12780],
00:09:07.248 | 99.99th=[13566]
00:09:07.248 write: IOPS=1278, BW=160MiB/s (168MB/s)(1020MiB/6384msec); 0 zone resets
00:09:07.248 slat (usec): min=30, max=5739, avg=80.36, stdev=212.26
00:09:07.248 clat (usec): min=675, max=19060, avg=6044.85, stdev=1832.50
00:09:07.248 lat (usec): min=741, max=19127, avg=6125.21, stdev=1834.58
00:09:07.248 clat percentiles (usec):
00:09:07.248 | 1.00th=[ 2180], 5.00th=[ 3326], 10.00th=[ 3949], 20.00th=[ 4686],
00:09:07.248 | 30.00th=[ 5211], 40.00th=[ 5735], 50.00th=[ 5997], 60.00th=[ 6128],
00:09:07.248 | 70.00th=[ 6521], 80.00th=[ 7242], 90.00th=[ 8160], 95.00th=[ 9372],
00:09:07.248 | 99.00th=[11863], 99.50th=[13173], 99.90th=[14746], 99.95th=[15533],
00:09:07.248 | 99.99th=[19006]
00:09:07.248 bw ( KiB/s): min=81920, max=120064, per=15.56%, avg=103446.89, stdev=12293.50, samples=19
00:09:07.248 iops : min= 640, max= 938, avg=808.11, stdev=96.08, samples=19
00:09:07.248 lat (usec) : 500=0.06%, 750=0.13%, 1000=0.54%
00:09:07.248 lat (msec) : 2=7.27%, 4=29.70%, 10=60.32%, 20=1.98%
00:09:07.248 cpu : usr=6.57%, sys=2.83%, ctx=13717, majf=0, minf=1
00:09:07.248 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:07.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:07.248 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:07.248 issued rwts: total=8027,8160,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:07.248 latency : target=0, window=0, percentile=100.00%, depth=8
00:09:07.248
00:09:07.248 Run status group 0 (all jobs):
00:09:07.248 READ: bw=429MiB/s (450MB/s), 102MiB/s-113MiB/s (107MB/s-119MB/s), io=4243MiB (4449MB), run=9833-9888msec
00:09:07.248 WRITE: bw=649MiB/s (681MB/s), 160MiB/s-171MiB/s (167MB/s-179MB/s), io=4287MiB (4495MB), run=6384-6601msec
00:09:07.248
00:09:07.248 Disk stats (read/write):
00:09:07.248 sdb: ios=10330/8894, merge=0/0, ticks=28577/48162, in_queue=76739, util=97.93%
00:09:07.248 sdd: ios=10291/8800, merge=0/0, ticks=28483/47714, in_queue=76198, util=97.62%
00:09:07.248 sda: ios=9605/8160, merge=0/0, ticks=29960/46589, in_queue=76549, util=97.67%
00:09:07.248 sdc: ios=9512/8021, merge=0/0, ticks=30378/45865, in_queue=76244, util=97.37%
00:09:07.248 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@78 -- # timing_exit fio
00:09:07.248 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable
00:09:07.248 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x
00:09:07.248 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@80 -- # rm -f ./local-job0-0-verify.state
00:09:07.248
10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@81 -- # trap - SIGINT SIGTERM EXIT
00:09:07.248 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@83 -- # rm -f
00:09:07.248 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@84 -- # iscsicleanup
00:09:07.248 Cleaning up iSCSI connection
00:09:07.248 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection'
00:09:07.248 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout
00:09:07.248 Logging out of session [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260]
00:09:07.248 Logging out of session [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260]
00:09:07.248 Logout of [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful.
00:09:07.248 Logout of [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful.
00:09:07.506 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete
00:09:07.506 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@983 -- # rm -rf
00:09:07.506 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@85 -- # killprocess 65526
00:09:07.506 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@948 -- # '[' -z 65526 ']'
00:09:07.506 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@952 -- # kill -0 65526
00:09:07.506 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@953 -- # uname
00:09:07.506 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:07.506 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65526
00:09:07.506 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:09:07.506 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:09:07.506 killing process with pid 65526
00:09:07.507 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65526'
00:09:07.507 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@967 -- # kill 65526
00:09:07.507 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@972 -- # wait 65526
00:09:07.768 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@86 -- # iscsitestfini
00:09:07.768 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']'
00:09:07.768
00:09:07.768 real 0m17.436s
00:09:07.768 user 1m5.812s
00:09:07.768 sys 0m7.458s
00:09:07.768 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:07.768 ************************************
00:09:07.768 END TEST iscsi_tgt_iscsi_lvol
00:09:07.768 ************************************
00:09:07.768 10:09:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x
00:09:08.032 10:09:41 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0
00:09:08.032 10:09:41 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@37 -- # run_test iscsi_tgt_fio /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh
00:09:08.032 10:09:41 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:08.032 10:09:41 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:08.032 10:09:41 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x
00:09:08.032 ************************************
00:09:08.032 START TEST iscsi_tgt_fio
00:09:08.032 ************************************
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh
00:09:08.032 * Looking for test storage...
00:09:08.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@11 -- # iscsitestinit
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']'
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@48 -- # '[' -z 10.0.0.1 ']'
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@53 -- # '[' -z 10.0.0.2 ']'
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@58 -- # MALLOC_BDEV_SIZE=64
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@59 -- # MALLOC_BLOCK_SIZE=4096
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@60 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@61 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@63 -- # timing_enter start_iscsi_tgt
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@66 -- # pid=66936
00:09:08.032 Process pid: 66936
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@67 -- # echo 'Process pid: 66936'
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@69 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@71 -- # waitforlisten 66936
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@829 -- # '[' -z 66936 ']'
00:09:08.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:08.032 10:09:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:08.033 10:09:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:08.033 10:09:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x
00:09:08.033 10:09:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@65 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc
00:09:08.033 [2024-07-25 10:09:41.205962] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization...
00:09:08.033 [2024-07-25 10:09:41.206031] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66936 ]
00:09:08.290 [2024-07-25 10:09:41.343253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.290 [2024-07-25 10:09:41.441078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:08.923 10:09:42 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:08.923 10:09:42 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@862 -- # return 0
00:09:08.923 10:09:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:09:09.487 iscsi_tgt is listening. Running tests...
00:09:09.487 10:09:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@75 -- # echo 'iscsi_tgt is listening. Running tests...'
00:09:09.487 10:09:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@77 -- # timing_exit start_iscsi_tgt
00:09:09.487 10:09:42 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@728 -- # xtrace_disable
00:09:09.487 10:09:42 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x
00:09:09.487 10:09:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
00:09:09.745 10:09:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
00:09:10.004 10:09:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096
00:09:10.263 10:09:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # malloc_bdevs='Malloc0 '
00:09:10.263 10:09:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096
00:09:10.523 10:09:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # malloc_bdevs+=Malloc1
00:09:10.523 10:09:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:09:10.788 10:09:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 1024 512
00:09:11.046 10:09:44 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # bdev=Malloc2
00:09:11.047 10:09:44 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias 'raid0:0 Malloc2:1' 1:2 64 -d
00:09:11.304 10:09:44 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@91 -- # sleep 1
00:09:12.241 10:09:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@93 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260
00:09:12.241 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3
00:09:12.241 10:09:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@94 -- # iscsiadm -m node --login -p 10.0.0.1:3260
00:09:12.241 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260]
00:09:12.241 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful.
00:09:12.241 10:09:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@95 -- # waitforiscsidevices 2
00:09:12.241 10:09:45 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@116 -- # local num=2
00:09:12.241 10:09:45 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i = 1 ))
00:09:12.241 [2024-07-25 10:09:45.432399] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:09:12.241 10:09:45 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i <= 20 ))
00:09:12.241 10:09:45 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*'
00:09:12.241 10:09:45 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3
00:09:12.241 [2024-07-25 10:09:45.440013] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:09:12.241 10:09:45 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # n=2
00:09:12.241 10:09:45 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@120 -- # '[' 2 -ne 2 ']'
00:09:12.241 10:09:45 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@123 -- # return 0
00:09:12.241 10:09:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@97 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; delete_tmp_files; exit 1' SIGINT SIGTERM EXIT
00:09:12.241 10:09:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v
00:09:12.241 [global]
00:09:12.241 thread=1
00:09:12.241 invalidate=1
00:09:12.241 rw=randrw
00:09:12.241 time_based=1
00:09:12.241 runtime=1
00:09:12.241 ioengine=libaio
00:09:12.241 direct=1
00:09:12.241 bs=4096
00:09:12.241 iodepth=1
00:09:12.241 norandommap=0
00:09:12.241 numjobs=1
00:09:12.241
00:09:12.241 verify_dump=1
00:09:12.241 verify_backlog=512
00:09:12.241 verify_state_save=0
00:09:12.241 do_verify=1
00:09:12.241 verify=crc32c-intel
00:09:12.241 [job0]
00:09:12.241 filename=/dev/sda
00:09:12.241 [job1]
00:09:12.241 filename=/dev/sdb
00:09:12.501 queue_depth set to 113 (sda)
00:09:12.501 queue_depth set to 113 (sdb)
00:09:12.501 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:12.501 job1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:12.501 fio-3.35
00:09:12.501 Starting 2 threads
00:09:12.501 [2024-07-25 10:09:45.682273] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:09:12.501 [2024-07-25 10:09:45.686373] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:09:13.878 [2024-07-25 10:09:46.799649] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:09:13.878 [2024-07-25 10:09:46.803826] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:09:13.878
00:09:13.878 job0: (groupid=0, jobs=1): err= 0: pid=67082: Thu Jul 25 10:09:46 2024
00:09:13.878 read: IOPS=6847, BW=26.7MiB/s (28.0MB/s)(26.8MiB/1001msec)
00:09:13.878 slat (nsec): min=2969, max=66151, avg=6102.99, stdev=1915.47
00:09:13.878 clat (usec): min=51, max=3047, avg=88.00, stdev=37.80
00:09:13.878 lat (usec): min=56, max=3060, avg=94.10, stdev=38.02
00:09:13.878 clat percentiles (usec):
00:09:13.878 | 1.00th=[ 59], 5.00th=[ 72], 10.00th=[ 81], 20.00th=[ 84],
00:09:13.878 | 30.00th=[ 86], 40.00th=[ 86], 50.00th=[ 87], 60.00th=[ 88],
00:09:13.878 | 70.00th=[ 89], 80.00th=[ 94], 90.00th=[ 98], 95.00th=[ 102],
00:09:13.878 | 99.00th=[ 115], 99.50th=[ 121], 99.90th=[ 243], 99.95th=[ 326],
00:09:13.878 | 99.99th=[ 3032]
00:09:13.878 bw ( KiB/s): min=13924, max=13924, per=25.72%, avg=13924.00, stdev= 0.00, samples=1
00:09:13.878 iops : min= 3481, max= 3481, avg=3481.00, stdev= 0.00, samples=1
00:09:13.878 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets
00:09:13.878 slat (nsec): min=3846, max=28208, avg=7292.88, stdev=2311.73
00:09:13.878 clat (usec): min=52, max=5720, avg=89.50, stdev=97.36
00:09:13.878 lat (usec): min=58, max=5729, avg=96.79, stdev=97.46
00:09:13.878 clat percentiles (usec):
00:09:13.878 | 1.00th=[ 56], 5.00th=[ 73], 10.00th=[ 78], 20.00th=[ 81],
00:09:13.878 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 88],
00:09:13.878 | 70.00th=[ 91], 80.00th=[ 95], 90.00th=[ 100], 95.00th=[ 105],
00:09:13.878 | 99.00th=[ 120], 99.50th=[ 130], 99.90th=[ 424], 99.95th=[ 1303],
00:09:13.878 | 99.99th=[ 5735]
00:09:13.878 bw ( KiB/s): min=14443, max=14443, per=50.42%, avg=14443.00, stdev= 0.00, samples=1
00:09:13.878 iops : min= 3610, max= 3610, avg=3610.00, stdev= 0.00, samples=1
00:09:13.878 lat (usec) : 100=92.74%, 250=7.16%, 500=0.07%, 750=0.01%
00:09:13.878 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%
00:09:13.878 cpu : usr=3.60%, sys=9.10%, ctx=10438, majf=0, minf=7
00:09:13.878 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:13.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:13.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:13.878 issued rwts: total=6854,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:13.878 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:13.878 job1: (groupid=0, jobs=1): err= 0: pid=67083: Thu Jul 25 10:09:46 2024
00:09:13.878 read: IOPS=6688, BW=26.1MiB/s (27.4MB/s)(26.2MiB/1001msec)
00:09:13.878 slat (nsec): min=4010, max=58947, avg=8122.67, stdev=2342.65
00:09:13.878 clat (usec): min=45, max=3603, avg=85.99, stdev=61.53
00:09:13.878 lat (usec): min=52, max=3612, avg=94.11, stdev=61.60
00:09:13.878 clat percentiles (usec):
00:09:13.878 | 1.00th=[ 61], 5.00th=[ 73], 10.00th=[ 76], 20.00th=[ 80],
00:09:13.878 | 30.00th=[ 81], 40.00th=[ 84], 50.00th=[ 84], 60.00th=[ 85],
00:09:13.878 | 70.00th=[ 87], 80.00th=[ 91], 90.00th=[ 95], 95.00th=[ 100],
00:09:13.878 | 99.00th=[ 120], 99.50th=[ 127], 99.90th=[ 172], 99.95th=[ 338],
00:09:13.878 | 99.99th=[ 3589]
00:09:13.878 bw ( KiB/s): min=14275, max=14275, per=26.37%, avg=14275.00, stdev= 0.00, samples=1
00:09:13.878 iops : min= 3568, max= 3568, avg=3568.00, stdev= 0.00, samples=1
00:09:13.878 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets
00:09:13.878 slat (nsec): min=5022, max=41644, avg=9511.95, stdev=2850.00
00:09:13.878 clat (usec): min=50, max=3187, avg=90.90, stdev=65.48
00:09:13.878 lat (usec): min=59, max=3205, avg=100.41, stdev=65.67
00:09:13.878 clat percentiles (usec):
00:09:13.878 | 1.00th=[ 68], 5.00th=[ 75], 10.00th=[ 78], 20.00th=[ 82],
00:09:13.879 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 90],
00:09:13.879 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 102], 95.00th=[ 111],
00:09:13.879 | 99.00th=[ 133], 99.50th=[ 141], 99.90th=[ 289], 99.95th=[ 2311],
00:09:13.879 | 99.99th=[ 3195]
00:09:13.879 bw ( KiB/s): min=15073, max=15073, per=52.62%, avg=15073.00, stdev= 0.00, samples=1
00:09:13.879 iops : min= 3768, max= 3768, avg=3768.00, stdev= 0.00, samples=1
00:09:13.879 lat (usec) : 50=0.03%, 100=92.29%, 250=7.60%, 500=0.03%, 750=0.01%
00:09:13.879 lat (msec) : 2=0.01%, 4=0.04%
00:09:13.879 cpu : usr=3.70%, sys=11.80%, ctx=10279, majf=0, minf=7
00:09:13.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:13.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:13.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:13.879 issued rwts: total=6695,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:13.879 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:13.879
00:09:13.879 Run status group 0 (all jobs):
00:09:13.879 READ: bw=52.9MiB/s (55.4MB/s), 26.1MiB/s-26.7MiB/s (27.4MB/s-28.0MB/s), io=52.9MiB (55.5MB), run=1001-1001msec
00:09:13.879 WRITE: bw=28.0MiB/s (29.3MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=28.0MiB (29.4MB), run=1001-1001msec
00:09:13.879
00:09:13.879 Disk stats (read/write):
00:09:13.879 sda: ios=6089/3091, merge=0/0, ticks=532/266, in_queue=798, util=89.64%
00:09:13.879 sdb: ios=5997/3072, merge=0/0, ticks=508/272, in_queue=780, util=89.81%
00:09:13.879 10:09:46 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v
00:09:13.879 [global]
00:09:13.879 thread=1
00:09:13.879 invalidate=1
00:09:13.879 rw=randrw
00:09:13.879 time_based=1
00:09:13.879 runtime=1
00:09:13.879 ioengine=libaio
00:09:13.879 direct=1
00:09:13.879 bs=131072
00:09:13.879 iodepth=32
00:09:13.879 norandommap=0
00:09:13.879 numjobs=1
00:09:13.879
00:09:13.879 verify_dump=1
00:09:13.879 verify_backlog=512
00:09:13.879 verify_state_save=0
00:09:13.879 do_verify=1
00:09:13.879 verify=crc32c-intel
00:09:13.879 [job0]
00:09:13.879 filename=/dev/sda
00:09:13.879 [job1]
00:09:13.879 filename=/dev/sdb
00:09:13.879 queue_depth set to 113 (sda)
00:09:13.879 queue_depth set to 113 (sdb)
00:09:13.879 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:09:13.879 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:09:13.879 fio-3.35
00:09:13.879 Starting 2 threads
00:09:13.879 [2024-07-25 10:09:47.035689] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:09:13.879 [2024-07-25 10:09:47.039305] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:09:14.816 [2024-07-25 10:09:48.038441] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:09:15.077 [2024-07-25 10:09:48.175077] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:09:15.077
00:09:15.078 job0: (groupid=0, jobs=1): err= 0: pid=67147: Thu Jul 25 10:09:48 2024
00:09:15.078 read: IOPS=1630, BW=204MiB/s (214MB/s)(207MiB/1013msec)
00:09:15.078 slat (usec): min=6, max=206, avg=19.00, stdev=12.10
00:09:15.078 clat (usec): min=1273, max=27499, avg=10438.90, stdev=4726.87
00:09:15.078 lat (usec): min=1284, max=27511, avg=10457.90, stdev=4727.33
00:09:15.078 clat percentiles (usec):
00:09:15.078 | 1.00th=[ 2376], 5.00th=[ 4686], 10.00th=[ 5014], 20.00th=[ 5604],
00:09:15.078 | 30.00th=[ 6063], 40.00th=[ 8094], 50.00th=[10290], 60.00th=[12649],
00:09:15.078 | 70.00th=[13829], 80.00th=[14877], 90.00th=[16188], 95.00th=[17433],
00:09:15.078 | 99.00th=[21627], 99.50th=[24773], 99.90th=[27132], 99.95th=[27395],
00:09:15.078 | 99.99th=[27395]
00:09:15.078 bw ( KiB/s): min=97024, max=122356, per=30.67%, avg=109690.00, stdev=17912.43, samples=2
00:09:15.078 iops : min= 758, max= 955, avg=856.50, stdev=139.30, samples=2
00:09:15.078 write: IOPS=955, BW=119MiB/s (125MB/s)(112MiB/934msec); 0 zone resets
00:09:15.078 slat (usec): min=43, max=163, avg=76.41, stdev=19.77
00:09:15.078 clat (usec): min=9408, max=29009, avg=16584.42, stdev=2928.88
00:09:15.078 lat (usec): min=9493, max=29065, avg=16660.83, stdev=2931.60
00:09:15.078 clat percentiles (usec):
00:09:15.078 | 1.00th=[10421], 5.00th=[12256], 10.00th=[13829], 20.00th=[14615],
00:09:15.078 | 30.00th=[15270], 40.00th=[15926], 50.00th=[16450], 60.00th=[16909],
00:09:15.078 | 70.00th=[17433], 80.00th=[17957], 90.00th=[19268], 95.00th=[22414],
00:09:15.078 | 99.00th=[27657], 99.50th=[28443], 99.90th=[28967], 99.95th=[28967],
00:09:15.078 | 99.99th=[28967]
00:09:15.078 bw ( KiB/s): min=93696, max=131334, per=46.47%, avg=112515.00, stdev=26614.09, samples=2
00:09:15.078 iops : min= 732, max= 1026, avg=879.00, stdev=207.89, samples=2
00:09:15.078 lat (msec) : 2=0.55%, 4=1.10%, 10=29.40%, 20=65.13%, 50=3.81%
00:09:15.078 cpu : usr=8.70%, sys=5.34%, ctx=2337, majf=0, minf=9
00:09:15.078 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=96.3%, >=64=0.0%
00:09:15.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:15.078 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:09:15.078 issued rwts: total=1652,892,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:15.078 latency : target=0, window=0, percentile=100.00%, depth=32
00:09:15.078 job1: (groupid=0, jobs=1): err= 0: pid=67148: Thu Jul 25 10:09:48 2024
00:09:15.078 read: IOPS=1164, BW=146MiB/s (153MB/s)(147MiB/1012msec)
00:09:15.078 slat (usec): min=6, max=536, avg=18.30, stdev=19.26
00:09:15.078 clat (usec): min=1612, max=27653, avg=12303.11, stdev=4227.72
00:09:15.078 lat (usec): min=1638, max=27665, avg=12321.41, stdev=4226.65
00:09:15.078 clat percentiles (usec):
00:09:15.078 | 1.00th=[ 2278], 5.00th=[ 5342], 10.00th=[ 6390], 20.00th=[ 8586],
00:09:15.078 | 30.00th=[10159], 40.00th=[11207], 50.00th=[12913], 60.00th=[14091],
00:09:15.078 | 70.00th=[14877], 80.00th=[15795], 90.00th=[17171], 95.00th=[18482],
00:09:15.078 | 99.00th=[21103], 99.50th=[23725], 99.90th=[27657], 99.95th=[27657],
00:09:15.078 | 99.99th=[27657]
00:09:15.078 bw ( KiB/s): min=116736, max=123648, per=33.61%, avg=120192.00, stdev=4887.52, samples=2
00:09:15.078 iops : min= 912, max= 966, avg=939.00, stdev=38.18, samples=2
00:09:15.078 write: IOPS=1010, BW=126MiB/s (132MB/s)(128MiB/1013msec); 0 zone resets
00:09:15.078 slat (usec): min=29, max=467, avg=79.07, stdev=24.02
00:09:15.078 clat (usec): min=1804, max=37735, avg=17276.95, stdev=4675.50
00:09:15.078 lat (usec): min=1880, max=37796, avg=17356.02, stdev=4674.98
00:09:15.078 clat percentiles (usec):
00:09:15.078 | 1.00th=[ 6849], 5.00th=[10945], 10.00th=[13829], 20.00th=[14877],
00:09:15.078 | 30.00th=[15533], 40.00th=[16188], 50.00th=[16581], 60.00th=[16909],
00:09:15.078 | 70.00th=[17695], 80.00th=[18744], 90.00th=[22414], 95.00th=[26346],
00:09:15.078 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[37487],
00:09:15.078 | 99.99th=[37487]
00:09:15.078 bw ( KiB/s): min=124928, max=137216, per=54.14%,
avg=131072.00, stdev=8688.93, samples=2 00:09:15.078 iops : min= 976, max= 1072, avg=1024.00, stdev=67.88, samples=2 00:09:15.078 lat (msec) : 2=0.32%, 4=1.45%, 10=15.44%, 20=74.57%, 50=8.22% 00:09:15.078 cpu : usr=8.70%, sys=3.85%, ctx=1999, majf=0, minf=9 00:09:15.078 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=98.6%, >=64=0.0% 00:09:15.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:09:15.078 issued rwts: total=1178,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.078 latency : target=0, window=0, percentile=100.00%, depth=32 00:09:15.078 00:09:15.078 Run status group 0 (all jobs): 00:09:15.078 READ: bw=349MiB/s (366MB/s), 146MiB/s-204MiB/s (153MB/s-214MB/s), io=354MiB (371MB), run=1012-1013msec 00:09:15.078 WRITE: bw=236MiB/s (248MB/s), 119MiB/s-126MiB/s (125MB/s-132MB/s), io=240MiB (251MB), run=934-1013msec 00:09:15.078 00:09:15.078 Disk stats (read/write): 00:09:15.078 sda: ios=1387/778, merge=0/0, ticks=14072/13117, in_queue=27189, util=89.52% 00:09:15.078 sdb: ios=911/942, merge=0/0, ticks=11130/16395, in_queue=27525, util=90.09% 00:09:15.078 10:09:48 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@101 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 524288 -d 128 -t randrw -r 1 -v 00:09:15.078 [global] 00:09:15.078 thread=1 00:09:15.078 invalidate=1 00:09:15.078 rw=randrw 00:09:15.078 time_based=1 00:09:15.078 runtime=1 00:09:15.078 ioengine=libaio 00:09:15.078 direct=1 00:09:15.078 bs=524288 00:09:15.078 iodepth=128 00:09:15.078 norandommap=0 00:09:15.078 numjobs=1 00:09:15.078 00:09:15.078 verify_dump=1 00:09:15.078 verify_backlog=512 00:09:15.078 verify_state_save=0 00:09:15.078 do_verify=1 00:09:15.078 verify=crc32c-intel 00:09:15.078 [job0] 00:09:15.078 filename=/dev/sda 00:09:15.078 [job1] 00:09:15.078 filename=/dev/sdb 00:09:15.078 queue_depth set to 113 (sda) 00:09:15.078 queue_depth set to 113 (sdb) 00:09:15.348 
job0: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:09:15.348 job1: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:09:15.348 fio-3.35 00:09:15.348 Starting 2 threads 00:09:15.348 [2024-07-25 10:09:48.429701] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:15.348 [2024-07-25 10:09:48.431872] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:16.285 [2024-07-25 10:09:49.494262] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:16.544 [2024-07-25 10:09:49.719511] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:16.544 00:09:16.544 job0: (groupid=0, jobs=1): err= 0: pid=67212: Thu Jul 25 10:09:49 2024 00:09:16.544 read: IOPS=370, BW=185MiB/s (194MB/s)(202MiB/1090msec) 00:09:16.545 slat (usec): min=20, max=49802, avg=1342.99, stdev=4026.01 00:09:16.545 clat (msec): min=66, max=298, avg=179.22, stdev=55.11 00:09:16.545 lat (msec): min=75, max=298, avg=180.56, stdev=55.06 00:09:16.545 clat percentiles (msec): 00:09:16.545 | 1.00th=[ 84], 5.00th=[ 91], 10.00th=[ 106], 20.00th=[ 133], 00:09:16.545 | 30.00th=[ 146], 40.00th=[ 163], 50.00th=[ 176], 60.00th=[ 194], 00:09:16.545 | 70.00th=[ 207], 80.00th=[ 226], 90.00th=[ 253], 95.00th=[ 296], 00:09:16.545 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:09:16.545 | 99.99th=[ 300] 00:09:16.545 bw ( KiB/s): min=115943, max=132096, per=33.94%, avg=124019.50, stdev=11421.90, samples=2 00:09:16.545 iops : min= 226, max= 258, avg=242.00, stdev=22.63, samples=2 00:09:16.545 write: IOPS=349, BW=175MiB/s (183MB/s)(135MiB/772msec); 0 zone resets 00:09:16.545 slat (usec): min=167, max=14942, avg=1404.44, stdev=2467.39 00:09:16.545 clat (msec): min=74, max=298, avg=185.77, stdev=46.50 00:09:16.545 lat (msec): min=75, max=299, avg=187.17, stdev=46.78 
00:09:16.545 clat percentiles (msec): 00:09:16.545 | 1.00th=[ 84], 5.00th=[ 106], 10.00th=[ 131], 20.00th=[ 148], 00:09:16.545 | 30.00th=[ 159], 40.00th=[ 167], 50.00th=[ 186], 60.00th=[ 201], 00:09:16.545 | 70.00th=[ 209], 80.00th=[ 222], 90.00th=[ 257], 95.00th=[ 271], 00:09:16.545 | 99.00th=[ 288], 99.50th=[ 292], 99.90th=[ 300], 99.95th=[ 300], 00:09:16.545 | 99.99th=[ 300] 00:09:16.545 bw ( KiB/s): min=107735, max=168960, per=41.83%, avg=138347.50, stdev=43292.61, samples=2 00:09:16.545 iops : min= 210, max= 330, avg=270.00, stdev=84.85, samples=2 00:09:16.545 lat (msec) : 100=6.68%, 250=81.75%, 500=11.57% 00:09:16.545 cpu : usr=9.09%, sys=2.57%, ctx=362, majf=0, minf=9 00:09:16.545 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.5%, >=64=81.3% 00:09:16.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.545 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:09:16.545 issued rwts: total=404,270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.545 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.545 job1: (groupid=0, jobs=1): err= 0: pid=67213: Thu Jul 25 10:09:49 2024 00:09:16.545 read: IOPS=344, BW=172MiB/s (180MB/s)(188MiB/1093msec) 00:09:16.545 slat (usec): min=18, max=28058, avg=1236.67, stdev=2660.02 00:09:16.545 clat (msec): min=94, max=248, avg=155.72, stdev=40.77 00:09:16.545 lat (msec): min=94, max=248, avg=156.96, stdev=40.87 00:09:16.545 clat percentiles (msec): 00:09:16.545 | 1.00th=[ 97], 5.00th=[ 107], 10.00th=[ 114], 20.00th=[ 122], 00:09:16.545 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 138], 60.00th=[ 148], 00:09:16.545 | 70.00th=[ 192], 80.00th=[ 205], 90.00th=[ 218], 95.00th=[ 228], 00:09:16.545 | 99.00th=[ 239], 99.50th=[ 245], 99.90th=[ 249], 99.95th=[ 249], 00:09:16.545 | 99.99th=[ 249] 00:09:16.545 bw ( KiB/s): min=110592, max=219136, per=45.12%, avg=164864.00, stdev=76752.20, samples=2 00:09:16.545 iops : min= 216, max= 428, avg=322.00, stdev=149.91, 
samples=2 00:09:16.545 write: IOPS=398, BW=199MiB/s (209MB/s)(218MiB/1093msec); 0 zone resets 00:09:16.545 slat (usec): min=141, max=15024, avg=1225.21, stdev=2305.26 00:09:16.545 clat (msec): min=89, max=265, avg=171.85, stdev=39.67 00:09:16.545 lat (msec): min=93, max=265, avg=173.08, stdev=39.88 00:09:16.545 clat percentiles (msec): 00:09:16.545 | 1.00th=[ 95], 5.00th=[ 125], 10.00th=[ 136], 20.00th=[ 144], 00:09:16.545 | 30.00th=[ 148], 40.00th=[ 153], 50.00th=[ 161], 60.00th=[ 165], 00:09:16.545 | 70.00th=[ 188], 80.00th=[ 213], 90.00th=[ 232], 95.00th=[ 249], 00:09:16.545 | 99.00th=[ 262], 99.50th=[ 264], 99.90th=[ 266], 99.95th=[ 266], 00:09:16.545 | 99.99th=[ 266] 00:09:16.545 bw ( KiB/s): min=118784, max=251904, per=56.04%, avg=185344.00, stdev=94130.05, samples=2 00:09:16.545 iops : min= 232, max= 492, avg=362.00, stdev=183.85, samples=2 00:09:16.545 lat (msec) : 100=2.46%, 250=95.94%, 500=1.60% 00:09:16.545 cpu : usr=10.53%, sys=3.21%, ctx=285, majf=0, minf=5 00:09:16.545 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.2% 00:09:16.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.545 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.545 issued rwts: total=376,436,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.545 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.545 00:09:16.545 Run status group 0 (all jobs): 00:09:16.545 READ: bw=357MiB/s (374MB/s), 172MiB/s-185MiB/s (180MB/s-194MB/s), io=390MiB (409MB), run=1090-1093msec 00:09:16.545 WRITE: bw=323MiB/s (339MB/s), 175MiB/s-199MiB/s (183MB/s-209MB/s), io=353MiB (370MB), run=772-1093msec 00:09:16.545 00:09:16.545 Disk stats (read/write): 00:09:16.545 sda: ios=425/270, merge=0/0, ticks=29450/23529, in_queue=52979, util=80.94% 00:09:16.545 sdb: ios=419/421, merge=0/0, ticks=22368/33050, in_queue=55417, util=81.45% 00:09:16.545 10:09:49 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@102 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 1024 -t read -r 1 -n 4 00:09:16.803 [global] 00:09:16.803 thread=1 00:09:16.803 invalidate=1 00:09:16.803 rw=read 00:09:16.803 time_based=1 00:09:16.803 runtime=1 00:09:16.803 ioengine=libaio 00:09:16.803 direct=1 00:09:16.803 bs=1048576 00:09:16.803 iodepth=1024 00:09:16.803 norandommap=1 00:09:16.803 numjobs=4 00:09:16.803 00:09:16.803 [job0] 00:09:16.803 filename=/dev/sda 00:09:16.803 [job1] 00:09:16.803 filename=/dev/sdb 00:09:16.803 queue_depth set to 113 (sda) 00:09:16.803 queue_depth set to 113 (sdb) 00:09:16.803 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:09:16.803 ... 00:09:16.803 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:09:16.803 ... 00:09:16.803 fio-3.35 00:09:16.803 Starting 8 threads 00:09:31.922 00:09:31.922 job0: (groupid=0, jobs=1): err= 0: pid=67284: Thu Jul 25 10:10:04 2024 00:09:31.922 read: IOPS=1, BW=1772KiB/s (1814kB/s)(25.0MiB/14448msec) 00:09:31.922 slat (usec): min=529, max=2050.0k, avg=82905.16, stdev=409810.39 00:09:31.922 clat (msec): min=12375, max=14446, avg=14352.51, stdev=412.01 00:09:31.922 lat (msec): min=14425, max=14447, avg=14435.41, stdev= 6.93 00:09:31.922 clat percentiles (msec): 00:09:31.922 | 1.00th=[12416], 5.00th=[14429], 10.00th=[14429], 20.00th=[14429], 00:09:31.922 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:09:31.922 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:09:31.922 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:09:31.922 | 99.99th=[14429] 00:09:31.922 lat (msec) : >=2000=100.00% 00:09:31.922 cpu : usr=0.00%, sys=0.12%, ctx=31, majf=0, minf=6401 00:09:31.922 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:09:31.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:09:31.922 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:09:31.922 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.922 latency : target=0, window=0, percentile=100.00%, depth=1024 00:09:31.922 job0: (groupid=0, jobs=1): err= 0: pid=67285: Thu Jul 25 10:10:04 2024 00:09:31.922 read: IOPS=1, BW=1630KiB/s (1669kB/s)(23.0MiB/14448msec) 00:09:31.922 slat (usec): min=535, max=2050.2k, avg=89909.58, stdev=427319.63 00:09:31.922 clat (msec): min=12379, max=14446, avg=14348.09, stdev=429.24 00:09:31.922 lat (msec): min=14429, max=14447, avg=14438.00, stdev= 5.55 00:09:31.922 clat percentiles (msec): 00:09:31.922 | 1.00th=[12416], 5.00th=[14429], 10.00th=[14429], 20.00th=[14429], 00:09:31.922 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:09:31.922 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:09:31.922 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:09:31.922 | 99.99th=[14429] 00:09:31.922 lat (msec) : >=2000=100.00% 00:09:31.922 cpu : usr=0.00%, sys=0.11%, ctx=25, majf=0, minf=5889 00:09:31.922 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:09:31.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.922 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:09:31.922 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.922 latency : target=0, window=0, percentile=100.00%, depth=1024 00:09:31.922 job0: (groupid=0, jobs=1): err= 0: pid=67286: Thu Jul 25 10:10:04 2024 00:09:31.922 read: IOPS=1, BW=1701KiB/s (1741kB/s)(24.0MiB/14452msec) 00:09:31.922 slat (usec): min=556, max=2050.1k, avg=86563.62, stdev=418232.90 00:09:31.922 clat (msec): min=12373, max=14450, avg=14351.76, stdev=421.38 00:09:31.922 lat (msec): min=14423, max=14451, avg=14438.33, stdev= 9.04 00:09:31.922 clat percentiles (msec): 00:09:31.922 | 1.00th=[12416], 5.00th=[14429], 
10.00th=[14429], 20.00th=[14429], 00:09:31.922 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:09:31.922 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:09:31.922 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:09:31.922 | 99.99th=[14429] 00:09:31.922 lat (msec) : >=2000=100.00% 00:09:31.922 cpu : usr=0.00%, sys=0.11%, ctx=62, majf=0, minf=6145 00:09:31.922 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:09:31.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.922 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:09:31.922 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.922 latency : target=0, window=0, percentile=100.00%, depth=1024 00:09:31.922 job0: (groupid=0, jobs=1): err= 0: pid=67287: Thu Jul 25 10:10:04 2024 00:09:31.922 read: IOPS=0, BW=638KiB/s (654kB/s)(9216KiB/14438msec) 00:09:31.922 slat (usec): min=529, max=2050.0k, avg=228643.18, stdev=683011.50 00:09:31.922 clat (msec): min=12380, max=14435, avg=14204.43, stdev=684.14 00:09:31.922 lat (msec): min=14430, max=14437, avg=14433.07, stdev= 2.65 00:09:31.922 clat percentiles (msec): 00:09:31.923 | 1.00th=[12416], 5.00th=[12416], 10.00th=[12416], 20.00th=[14429], 00:09:31.923 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:09:31.923 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:09:31.923 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:09:31.923 | 99.99th=[14429] 00:09:31.923 lat (msec) : >=2000=100.00% 00:09:31.923 cpu : usr=0.00%, sys=0.04%, ctx=16, majf=0, minf=2305 00:09:31.923 IO depths : 1=11.1%, 2=22.2%, 4=44.4%, 8=22.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.923 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.923 issued rwts: 
total=9,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.923 latency : target=0, window=0, percentile=100.00%, depth=1024 00:09:31.923 job1: (groupid=0, jobs=1): err= 0: pid=67288: Thu Jul 25 10:10:04 2024 00:09:31.923 read: IOPS=0, BW=71.0KiB/s (72.7kB/s)(1024KiB/14423msec) 00:09:31.923 slat (nsec): min=4507.3M, max=4507.3M, avg=4507254087.00, stdev= 0.00 00:09:31.923 clat (nsec): min=9915.6M, max=9915.6M, avg=9915575269.00, stdev= 0.00 00:09:31.923 lat (nsec): min=14423M, max=14423M, avg=14422829356.00, stdev= 0.00 00:09:31.923 clat percentiles (msec): 00:09:31.923 | 1.00th=[ 9866], 5.00th=[ 9866], 10.00th=[ 9866], 20.00th=[ 9866], 00:09:31.923 | 30.00th=[ 9866], 40.00th=[ 9866], 50.00th=[ 9866], 60.00th=[ 9866], 00:09:31.923 | 70.00th=[ 9866], 80.00th=[ 9866], 90.00th=[ 9866], 95.00th=[ 9866], 00:09:31.923 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:09:31.923 | 99.99th=[ 9866] 00:09:31.923 lat (msec) : >=2000=100.00% 00:09:31.923 cpu : usr=0.00%, sys=0.00%, ctx=3, majf=0, minf=257 00:09:31.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.923 issued rwts: total=1,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.923 latency : target=0, window=0, percentile=100.00%, depth=1024 00:09:31.923 job1: (groupid=0, jobs=1): err= 0: pid=67289: Thu Jul 25 10:10:04 2024 00:09:31.923 read: IOPS=2, BW=2123KiB/s (2174kB/s)(30.0MiB/14467msec) 00:09:31.923 slat (usec): min=498, max=2044.2k, avg=68834.02, stdev=373083.49 00:09:31.923 clat (msec): min=12401, max=14465, avg=14386.16, stdev=374.90 00:09:31.923 lat (msec): min=14445, max=14466, avg=14454.99, stdev= 6.52 00:09:31.923 clat percentiles (msec): 00:09:31.923 | 1.00th=[12416], 5.00th=[14429], 10.00th=[14429], 20.00th=[14429], 00:09:31.923 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 
60.00th=[14429], 00:09:31.923 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:09:31.923 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:09:31.923 | 99.99th=[14429] 00:09:31.923 lat (msec) : >=2000=100.00% 00:09:31.923 cpu : usr=0.00%, sys=0.12%, ctx=52, majf=0, minf=7681 00:09:31.923 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:09:31.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.923 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:09:31.923 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.923 latency : target=0, window=0, percentile=100.00%, depth=1024 00:09:31.923 job1: (groupid=0, jobs=1): err= 0: pid=67290: Thu Jul 25 10:10:04 2024 00:09:31.923 read: IOPS=1, BW=1770KiB/s (1812kB/s)(25.0MiB/14465msec) 00:09:31.923 slat (usec): min=493, max=2044.4k, avg=82557.36, stdev=408713.04 00:09:31.923 clat (msec): min=12400, max=14461, avg=14369.73, stdev=410.38 00:09:31.923 lat (msec): min=14444, max=14464, avg=14452.29, stdev= 5.76 00:09:31.923 clat percentiles (msec): 00:09:31.923 | 1.00th=[12416], 5.00th=[14429], 10.00th=[14429], 20.00th=[14429], 00:09:31.923 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:09:31.923 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:09:31.923 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:09:31.923 | 99.99th=[14429] 00:09:31.923 lat (msec) : >=2000=100.00% 00:09:31.923 cpu : usr=0.01%, sys=0.11%, ctx=39, majf=0, minf=6401 00:09:31.923 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:09:31.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.923 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:09:31.923 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.923 latency : target=0, window=0, 
percentile=100.00%, depth=1024 00:09:31.923 job1: (groupid=0, jobs=1): err= 0: pid=67291: Thu Jul 25 10:10:04 2024 00:09:31.923 read: IOPS=0, BW=284KiB/s (290kB/s)(4096KiB/14440msec) 00:09:31.923 slat (usec): min=1168, max=4509.2k, avg=1129580.09, stdev=2253091.02 00:09:31.923 clat (msec): min=9920, max=14432, avg=13303.47, stdev=2255.23 00:09:31.923 lat (msec): min=14429, max=14438, avg=14433.05, stdev= 4.07 00:09:31.923 clat percentiles (msec): 00:09:31.923 | 1.00th=[ 9866], 5.00th=[ 9866], 10.00th=[ 9866], 20.00th=[ 9866], 00:09:31.923 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:09:31.923 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:09:31.923 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:09:31.923 | 99.99th=[14429] 00:09:31.923 lat (msec) : >=2000=100.00% 00:09:31.923 cpu : usr=0.00%, sys=0.03%, ctx=10, majf=0, minf=1025 00:09:31.923 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.923 issued rwts: total=4,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.923 latency : target=0, window=0, percentile=100.00%, depth=1024 00:09:31.923 00:09:31.923 Run status group 0 (all jobs): 00:09:31.923 READ: bw=9980KiB/s (10.2MB/s), 71.0KiB/s-2123KiB/s (72.7kB/s-2174kB/s), io=141MiB (148MB), run=14423-14467msec 00:09:31.923 00:09:31.923 Disk stats (read/write): 00:09:31.923 sda: ios=57/0, merge=0/0, ticks=284760/0, in_queue=284761, util=98.06% 00:09:31.923 sdb: ios=20/0, merge=0/0, ticks=158950/0, in_queue=158950, util=99.35% 00:09:31.923 10:10:04 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@104 -- # '[' 0 -eq 1 ']' 00:09:31.923 10:10:04 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@116 -- # fio_pid=67441 00:09:31.923 10:10:04 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@118 -- # sleep 3 00:09:31.923 10:10:04 
iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 128 -t rw -r 10 00:09:31.923 [global] 00:09:31.923 thread=1 00:09:31.923 invalidate=1 00:09:31.923 rw=rw 00:09:31.923 time_based=1 00:09:31.923 runtime=10 00:09:31.923 ioengine=libaio 00:09:31.923 direct=1 00:09:31.923 bs=1048576 00:09:31.923 iodepth=128 00:09:31.923 norandommap=1 00:09:31.923 numjobs=1 00:09:31.923 00:09:31.923 [job0] 00:09:31.923 filename=/dev/sda 00:09:31.923 [job1] 00:09:31.923 filename=/dev/sdb 00:09:31.923 queue_depth set to 113 (sda) 00:09:31.923 queue_depth set to 113 (sdb) 00:09:31.923 job0: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:09:31.923 job1: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:09:31.923 fio-3.35 00:09:31.923 Starting 2 threads 00:09:31.923 [2024-07-25 10:10:04.834152] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:31.923 [2024-07-25 10:10:04.836780] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:34.465 10:10:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:34.724 [2024-07-25 10:10:07.809481] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (raid0) received event(SPDK_BDEV_EVENT_REMOVE) 00:09:34.724 [2024-07-25 10:10:07.810599] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d45 00:09:34.724 [2024-07-25 10:10:07.812268] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d45 00:09:34.724 [2024-07-25 10:10:07.813027] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d45 00:09:34.724 [2024-07-25 10:10:07.827867] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d45 00:09:34.724 [2024-07-25 10:10:07.829823] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.831114] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.833986] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.837469] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.839302] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.841158] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.843260] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.845301] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.847208] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.849324] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.850897] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.852669] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.854525] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.856744] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 10:10:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:09:34.724 10:10:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:34.724 [2024-07-25 10:10:07.858533] iscsi.c:4221:iscsi_pdu_hdr_op_data: 
*ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.860314] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d46 00:09:34.724 [2024-07-25 10:10:07.862550] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.864333] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.866357] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.868528] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.870607] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.872369] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.874141] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.898588] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.900104] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.901963] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.903381] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.904814] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.906380] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.908271] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.909759] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for 
transfer_tag=d47 00:09:34.724 [2024-07-25 10:10:07.911469] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d47 00:09:34.982 10:10:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:09:34.982 10:10:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:34.982 fio: io_u error on file /dev/sda: Input/output error: read offset=102760448, buflen=1048576 00:09:34.982 fio: io_u error on file /dev/sda: Input/output error: write offset=125829120, buflen=1048576 00:09:35.241 10:10:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=126877696, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=127926272, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=128974848, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=130023424, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=131072000, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=132120576, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=133169152, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=0, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=103809024, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=104857600, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=122683392, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=105906176, buflen=1048576 
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=106954752, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=108003328, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=123731968, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=124780544, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=1048576, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=2097152, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=3145728, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=109051904, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=4194304, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=110100480, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=111149056, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=5242880, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=6291456, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=7340032, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=8388608, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=9437184, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=112197632, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=10485760, buflen=1048576 00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=113246208, buflen=1048576 
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=114294784, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=11534336, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=115343360, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=12582912, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=116391936, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=117440512, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=118489088, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=13631488, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=14680064, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=15728640, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=16777216, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=119537664, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=120586240, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=121634816, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=122683392, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=17825792, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=18874368, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=19922944, buflen=1048576
00:09:35.241 fio: pid=67472, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=123731968, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=124780544, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=125829120, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=20971520, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=22020096, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=126877696, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=127926272, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=128974848, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=130023424, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=131072000, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=132120576, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=133169152, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=23068672, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=24117248, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=0, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=1048576, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=2097152, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=25165824, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=26214400, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=3145728, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: read offset=4194304, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=27262976, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=28311552, buflen=1048576
00:09:35.241 fio: io_u error on file /dev/sda: Input/output error: write offset=29360128, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=5242880, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=30408704, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=31457280, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=6291456, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=7340032, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=8388608, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=9437184, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=10485760, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=32505856, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=33554432, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=34603008, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=11534336, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=12582912, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=13631488, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=35651584, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=14680064, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=15728640, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=16777216, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=36700160, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=17825792, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=18874368, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=37748736, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=19922944, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=38797312, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=39845888, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=20971520, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=22020096, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=23068672, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=40894464, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=41943040, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=24117248, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=25165824, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=42991616, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=44040192, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=45088768, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=46137344, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=26214400, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=47185920, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=48234496, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=27262976, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=49283072, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=28311552, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=50331648, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=51380224, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=52428800, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=53477376, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=29360128, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=54525952, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=30408704, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=55574528, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=31457280, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: write offset=56623104, buflen=1048576
00:09:35.242 fio: io_u error on file /dev/sda: Input/output error: read offset=32505856, buflen=1048576
00:09:35.501 [2024-07-25 10:10:08.557152] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Malloc2) received event(SPDK_BDEV_EVENT_REMOVE)
00:09:35.501 [2024-07-25 10:10:08.561684] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.501 [2024-07-25 10:10:08.562841] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.501 [2024-07-25 10:10:08.564311] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.501 [2024-07-25 10:10:08.565435] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.501 [2024-07-25 10:10:08.566917] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.760 [2024-07-25 10:10:08.874496] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.760 [2024-07-25 10:10:08.876367] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.760 [2024-07-25 10:10:08.877656] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.760 [2024-07-25 10:10:08.878716] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.760 [2024-07-25 10:10:08.880203] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.760 [2024-07-25 10:10:08.881217] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.760 [2024-07-25 10:10:08.881292] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.760 [2024-07-25 10:10:08.881339] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.760 [2024-07-25 10:10:08.881387] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.760 [2024-07-25 10:10:08.881443] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df4
00:09:35.760 [2024-07-25 10:10:08.881487] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 [2024-07-25 10:10:08.881526] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 [2024-07-25 10:10:08.887163] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 [2024-07-25 10:10:08.892551] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 [2024-07-25 10:10:08.892609] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 [2024-07-25 10:10:08.892654] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 [2024-07-25 10:10:08.892696] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 10:10:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@131 -- # fio_status=0
00:09:35.760 [2024-07-25 10:10:08.897314] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 10:10:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # wait 67441
00:09:35.760 [2024-07-25 10:10:08.898663] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 [2024-07-25 10:10:08.900101] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 [2024-07-25 10:10:08.901116] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 [2024-07-25 10:10:08.902286] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 [2024-07-25 10:10:08.903339] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 [2024-07-25 10:10:08.904641] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 [2024-07-25 10:10:08.905677] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 [2024-07-25 10:10:08.906880] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df5
00:09:35.760 [2024-07-25 10:10:08.907986] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.909298] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.910290] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.911382] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.912657] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.913626] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.914803] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.915894] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.917206] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.918214] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.919557] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.920600] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.921914] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.922977] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.924163] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
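In the per-job summaries fio prints at the end of this run, the reported BW is simply total I/O divided by runtime; for job0 below, 226 MiB read over 3392 msec and 245 MiB written over the same window. A quick sanity check of those figures (the numbers are taken from the log; the arithmetic itself is the only thing this sketch adds):

```shell
#!/usr/bin/env bash
# Recompute job0's bandwidth from its summary line:
#   read: BW=66.6MiB/s (226MiB/3392msec), write: BW=72.2MiB/s (245MiB/3392msec)
read_bw=$(awk 'BEGIN { printf "%.1f", 226 / (3392 / 1000) }')
write_bw=$(awk 'BEGIN { printf "%.1f", 245 / (3392 / 1000) }')
echo "job0: read ${read_bw} MiB/s, write ${write_bw} MiB/s"
```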
00:09:35.760 [2024-07-25 10:10:08.925428] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df6
00:09:35.760 [2024-07-25 10:10:08.926474] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.927567] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.928992] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.930131] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.931557] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.932593] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.933702] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.934780] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.936159] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.937199] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.938471] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.939601] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.940575] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.941901] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.943054] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 [2024-07-25 10:10:08.944196] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=df7
00:09:35.760 fio: io_u error on file /dev/sdb: Input/output error: read offset=682622976, buflen=1048576
00:09:35.760 fio: io_u error on file /dev/sdb: Input/output error: read offset=683671552, buflen=1048576
00:09:35.760 fio: io_u error on file /dev/sdb: Input/output error: read offset=684720128, buflen=1048576
00:09:35.760 fio: io_u error on file /dev/sdb: Input/output error: read offset=685768704, buflen=1048576
00:09:35.760 fio: io_u error on file /dev/sdb: Input/output error: read offset=686817280, buflen=1048576
00:09:35.760 fio: io_u error on file /dev/sdb: Input/output error: read offset=687865856, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=688914432, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=689963008, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=691011584, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=692060160, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=693108736, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=694157312, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=695205888, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=696254464, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=697303040, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=751828992, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=698351616, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=699400192, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=700448768, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=701497344, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=702545920, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=752877568, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=703594496, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=753926144, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=754974720, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=756023296, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=704643072, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=757071872, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=739246080, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=740294656, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=741343232, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=742391808, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=743440384, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=744488960, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=745537536, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=746586112, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=747634688, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=748683264, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=749731840, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=750780416, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=735051776, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=736100352, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=737148928, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=738197504, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=705691648, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=758120448, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=759169024, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=760217600, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=761266176, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=706740224, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=762314752, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=707788800, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=763363328, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=764411904, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=708837376, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=709885952, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=765460480, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=710934528, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=711983104, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=713031680, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=714080256, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=766509056, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=715128832, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=716177408, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=767557632, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=768606208, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=717225984, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=769654784, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=770703360, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=771751936, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=772800512, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=773849088, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=774897664, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=718274560, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=775946240, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=776994816, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=719323136, buflen=1048576
00:09:35.761 fio: pid=67473, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=720371712, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=721420288, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=778043392, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=722468864, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=723517440, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=779091968, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=780140544, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=724566016, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=781189120, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=782237696, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=725614592, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=726663168, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=727711744, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=783286272, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=728760320, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=784334848, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=734003200, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=729808896, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=730857472, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=787480576, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=785383424, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=731906048, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=788529152, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=732954624, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=786432000, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=789577728, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=790626304, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=791674880, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=792723456, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=735051776, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=793772032, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=794820608, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=736100352, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=737148928, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=738197504, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=739246080, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=795869184, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=740294656, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=741343232, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=742391808, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=796917760, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=743440384, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=744488960, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=745537536, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: write offset=797966336, buflen=1048576
00:09:35.761 fio: io_u error on file /dev/sdb: Input/output error: read offset=746586112, buflen=1048576
00:09:35.762 fio: io_u error on file /dev/sdb: Input/output error: read offset=747634688, buflen=1048576
00:09:35.762 fio: io_u error on file /dev/sdb: Input/output error: read offset=748683264, buflen=1048576
00:09:35.762 fio: io_u error on file /dev/sdb: Input/output error: read offset=749731840, buflen=1048576
00:09:35.762 fio: io_u error on file /dev/sdb: Input/output error: write offset=799014912, buflen=1048576
00:09:35.762 fio: io_u error on file /dev/sdb: Input/output error: write offset=800063488, buflen=1048576
00:09:35.762 
00:09:35.762 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=67472: Thu Jul 25 10:10:08 2024
00:09:35.762   read: IOPS=84, BW=66.6MiB/s (69.9MB/s)(226MiB/3392msec)
00:09:35.762     slat (usec): min=28, max=89271, avg=4513.48, stdev=9215.43
00:09:35.762     clat (msec): min=350, max=1095, avg=686.03, stdev=161.18
00:09:35.762      lat (msec): min=350, max=1100, avg=690.65, stdev=162.03
00:09:35.762     clat percentiles (msec):
00:09:35.762      |  1.00th=[  384],  5.00th=[  409], 10.00th=[  472], 20.00th=[  514],
00:09:35.762      | 30.00th=[  617], 40.00th=[  651], 50.00th=[  701], 60.00th=[  735],
00:09:35.762      | 70.00th=[  768], 80.00th=[  810], 90.00th=[  894], 95.00th=[  969],
00:09:35.762      | 99.00th=[ 1070], 99.50th=[ 1083], 99.90th=[ 1099], 99.95th=[ 1099],
00:09:35.762      | 99.99th=[ 1099]
00:09:35.762    bw (  KiB/s): min=32768, max=106496, per=31.50%, avg=73388.00, stdev=25167.37, samples=6
00:09:35.762    iops        : min=   32, max=  104, avg=71.50, stdev=24.66, samples=6
00:09:35.762   write: IOPS=91, BW=72.2MiB/s (75.7MB/s)(245MiB/3392msec); 0 zone resets
00:09:35.762     slat (usec): min=77, max=211879, avg=5779.03, stdev=15089.93
00:09:35.762     clat (msec): min=411, max=1100, avg=754.51, stdev=160.19
00:09:35.762      lat (msec): min=424, max=1115, avg=760.48, stdev=160.42
00:09:35.762     clat percentiles (msec):
00:09:35.762      |  1.00th=[  422],  5.00th=[  439], 10.00th=[  531], 20.00th=[  634],
00:09:35.762      | 30.00th=[  684], 40.00th=[  743], 50.00th=[  760], 60.00th=[  768],
00:09:35.762      | 70.00th=[  810], 80.00th=[  877], 90.00th=[  961], 95.00th=[ 1083],
00:09:35.762      | 99.00th=[ 1099], 99.50th=[ 1099], 99.90th=[ 1099], 99.95th=[ 1099],
00:09:35.762      | 99.99th=[ 1099]
00:09:35.762    bw (  KiB/s): min=16384, max=122880, per=31.51%, avg=79174.17, stdev=39908.02, samples=6
00:09:35.762    iops        : min=   16, max=  120, avg=77.17, stdev=38.93, samples=6
00:09:35.762   lat (msec)   : 500=8.68%, 750=34.72%, 1000=29.72%, 2000=5.51%
00:09:35.762   cpu          : usr=0.74%, sys=1.24%, ctx=442, majf=0, minf=2
00:09:35.762   IO depths    : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.3%, >=64=89.5%
00:09:35.762      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:35.762      complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:09:35.762      issued rwts: total=288,311,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:35.762      latency   : target=0, window=0, percentile=100.00%, depth=128
00:09:35.762 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=67473: Thu Jul 25 10:10:08 2024
00:09:35.762   read: IOPS=185, BW=169MiB/s (177MB/s)(651MiB/3855msec)
00:09:35.762     slat (usec): min=25, max=217102, avg=2601.89, stdev=9745.40
00:09:35.762     clat (msec): min=109, max=647, avg=301.89, stdev=107.70
00:09:35.762      lat (msec): min=121, max=666, avg=304.63, stdev=108.59
00:09:35.762     clat percentiles (msec):
00:09:35.762      |  1.00th=[  132],  5.00th=[  184], 10.00th=[  192], 20.00th=[  220],
00:09:35.762      | 30.00th=[  232], 40.00th=[  262], 50.00th=[  288], 60.00th=[  300],
00:09:35.762      | 70.00th=[  317], 80.00th=[  359], 90.00th=[  460], 95.00th=[  558],
00:09:35.762      | 99.00th=[  617], 99.50th=[  634], 99.90th=[  651], 99.95th=[  651],
00:09:35.762      | 99.99th=[  651]
00:09:35.762    bw (  KiB/s): min=77824, max=288768, per=80.46%, avg=187440.14, stdev=78048.82, samples=7
00:09:35.762    iops        : min=   76, max=  282, avg=182.86, stdev=76.29, samples=7
00:09:35.762   write: IOPS=198, BW=182MiB/s (191MB/s)(701MiB/3855msec); 0 zone resets
00:09:35.762     slat (usec): min=66, max=339047, avg=2595.38, stdev=13538.17
00:09:35.762     clat (msec): min=188, max=692, avg=350.03, stdev=116.42
00:09:35.762      lat (msec): min=188, max=692, avg=352.28, stdev=116.75
00:09:35.762     clat percentiles (msec):
00:09:35.762      |  1.00th=[  215],  5.00th=[  228], 10.00th=[  234], 20.00th=[  255],
00:09:35.762      | 30.00th=[  271], 40.00th=[  284], 50.00th=[  321], 60.00th=[  355],
00:09:35.762      | 70.00th=[  380], 80.00th=[  426], 90.00th=[  567], 95.00th=[  625],
00:09:35.762      | 99.00th=[  659], 99.50th=[  667], 99.90th=[  693], 99.95th=[  693],
00:09:35.762      | 99.99th=[  693]
00:09:35.762    bw (  KiB/s): min=89932, max=319488, per=79.37%, avg=199456.00, stdev=88815.42, samples=7
00:09:35.762    iops        : min=   87, max=  312, avg=194.57, stdev=86.92, samples=7
00:09:35.762   lat (msec)   : 250=24.26%, 500=57.50%, 750=9.59%
00:09:35.762   cpu          : usr=1.48%, sys=2.57%, ctx=557, majf=0, minf=1
00:09:35.762   IO depths    : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7%
00:09:35.762      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:35.762      complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:35.762      issued rwts: total=716,764,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:35.762      latency   : target=0, window=0, percentile=100.00%, depth=128
00:09:35.762 
00:09:35.762 Run status group 0 (all jobs):
00:09:35.762    READ: bw=227MiB/s (239MB/s), 66.6MiB/s-169MiB/s (69.9MB/s-177MB/s), io=877MiB (920MB), run=3392-3855msec
00:09:35.762   WRITE: bw=245MiB/s (257MB/s), 72.2MiB/s-182MiB/s (75.7MB/s-191MB/s), io=946MiB (992MB), run=3392-3855msec
00:09:35.762 
00:09:35.762 Disk stats (read/write):
00:09:35.762   sda: ios=317/290, merge=0/0, ticks=74671/95800, in_queue=170470, util=80.43%
00:09:35.762   sdb: ios=721/720, merge=0/0, ticks=74386/113383, in_queue=187769, util=91.55%
00:09:36.020 iscsi hotplug test: fio failed as expected
00:09:36.020 Cleaning up iSCSI connection
00:09:36.020 10:10:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # fio_status=2
00:09:36.020 10:10:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@134 -- # '[' 2 -eq 0 ']'
00:09:36.020 10:10:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@138 -- # echo 'iscsi hotplug test: fio failed as expected'
00:09:36.020 10:10:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@141 -- # iscsicleanup
00:09:36.020 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection'
00:09:36.020 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout
00:09:36.020 Logging out of session [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260]
00:09:36.020 Logout of [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful.
00:09:36.020 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:09:36.020 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@983 -- # rm -rf 00:09:36.020 10:10:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2016-06.io.spdk:Target3 00:09:36.278 10:10:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@144 -- # delete_tmp_files 00:09:36.278 10:10:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@14 -- # rm -f /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/iscsi2.json 00:09:36.278 10:10:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@15 -- # rm -f ./local-job0-0-verify.state 00:09:36.278 10:10:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@16 -- # rm -f ./local-job1-1-verify.state 00:09:36.278 10:10:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:09:36.278 10:10:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@148 -- # killprocess 66936 00:09:36.278 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@948 -- # '[' -z 66936 ']' 00:09:36.278 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@952 -- # kill -0 66936 00:09:36.278 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@953 -- # uname 00:09:36.278 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.278 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66936 00:09:36.278 killing process with pid 66936 00:09:36.278 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:36.279 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:36.279 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66936' 00:09:36.279 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@967 -- # kill 66936 00:09:36.279 10:10:09 
iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@972 -- # wait 66936 00:09:36.536 10:10:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@150 -- # iscsitestfini 00:09:36.536 10:10:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:09:36.536 00:09:36.536 real 0m28.719s 00:09:36.536 user 0m27.674s 00:09:36.536 sys 0m6.215s 00:09:36.536 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.536 10:10:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:09:36.536 ************************************ 00:09:36.536 END TEST iscsi_tgt_fio 00:09:36.536 ************************************ 00:09:36.793 10:10:09 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:09:36.793 10:10:09 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@38 -- # run_test iscsi_tgt_qos /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh 00:09:36.793 10:10:09 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:36.793 10:10:09 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.793 10:10:09 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:09:36.793 ************************************ 00:09:36.793 START TEST iscsi_tgt_qos 00:09:36.793 ************************************ 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh 00:09:36.793 * Looking for test storage... 
00:09:36.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:09:36.793 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@11 -- # iscsitestinit 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@44 -- # '[' -z 10.0.0.1 ']' 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@49 -- # '[' -z 10.0.0.2 ']' 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@54 -- # MALLOC_BDEV_SIZE=64 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@55 -- # MALLOC_BLOCK_SIZE=512 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@56 -- # IOPS_RESULT= 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@57 -- # BANDWIDTH_RESULT= 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@58 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@60 -- # timing_enter start_iscsi_tgt 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@63 -- # pid=67626 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@64 -- # echo 'Process pid: 67626' 00:09:36.794 Process pid: 67626 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@65 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@62 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@66 -- # waitforlisten 67626 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@829 -- # '[' -z 67626 ']' 00:09:36.794 10:10:09 
iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.794 10:10:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:36.794 [2024-07-25 10:10:09.969950] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:09:36.794 [2024-07-25 10:10:09.970041] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67626 ] 00:09:37.052 [2024-07-25 10:10:10.106050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.052 [2024-07-25 10:10:10.205833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@862 -- # return 0 00:09:37.986 iscsi_tgt is listening. Running tests... 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@67 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@69 -- # timing_exit start_iscsi_tgt 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@71 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@72 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@73 -- # rpc_cmd bdev_malloc_create 64 512 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.986 10:10:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:37.986 Malloc0 00:09:37.986 10:10:11 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.986 10:10:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@78 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias Malloc0:0 1:2 64 -d 00:09:37.986 10:10:11 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.986 10:10:11 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:37.986 10:10:11 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.986 10:10:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@79 
-- # sleep 1 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@81 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:09:38.922 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@82 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:09:38.922 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:09:38.922 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@84 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@87 -- # run_fio Malloc0 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:38.922 [2024-07-25 10:10:12.069876] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.922 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:09:38.922 "tick_rate": 2100000000, 
00:09:38.922 "ticks": 1064755956940, 00:09:38.922 "bdevs": [ 00:09:38.922 { 00:09:38.922 "name": "Malloc0", 00:09:38.922 "bytes_read": 41472, 00:09:38.922 "num_read_ops": 4, 00:09:38.922 "bytes_written": 0, 00:09:38.922 "num_write_ops": 0, 00:09:38.922 "bytes_unmapped": 0, 00:09:38.922 "num_unmap_ops": 0, 00:09:38.922 "bytes_copied": 0, 00:09:38.922 "num_copy_ops": 0, 00:09:38.922 "read_latency_ticks": 830410, 00:09:38.922 "max_read_latency_ticks": 346748, 00:09:38.922 "min_read_latency_ticks": 20450, 00:09:38.923 "write_latency_ticks": 0, 00:09:38.923 "max_write_latency_ticks": 0, 00:09:38.923 "min_write_latency_ticks": 0, 00:09:38.923 "unmap_latency_ticks": 0, 00:09:38.923 "max_unmap_latency_ticks": 0, 00:09:38.923 "min_unmap_latency_ticks": 0, 00:09:38.923 "copy_latency_ticks": 0, 00:09:38.923 "max_copy_latency_ticks": 0, 00:09:38.923 "min_copy_latency_ticks": 0, 00:09:38.923 "io_error": {} 00:09:38.923 } 00:09:38.923 ] 00:09:38.923 }' 00:09:38.923 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:09:38.923 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=4 00:09:38.923 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:09:38.923 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=41472 00:09:38.923 10:10:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:09:39.181 [global] 00:09:39.181 thread=1 00:09:39.181 invalidate=1 00:09:39.181 rw=randread 00:09:39.181 time_based=1 00:09:39.181 runtime=5 00:09:39.181 ioengine=libaio 00:09:39.181 direct=1 00:09:39.181 bs=1024 00:09:39.181 iodepth=128 00:09:39.181 norandommap=1 00:09:39.181 numjobs=1 00:09:39.181 00:09:39.181 [job0] 00:09:39.181 filename=/dev/sda 00:09:39.181 queue_depth set to 113 (sda) 00:09:39.181 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, 
iodepth=128 00:09:39.181 fio-3.35 00:09:39.181 Starting 1 thread 00:09:44.450 00:09:44.450 job0: (groupid=0, jobs=1): err= 0: pid=67713: Thu Jul 25 10:10:17 2024 00:09:44.450 read: IOPS=51.1k, BW=49.9MiB/s (52.3MB/s)(250MiB/5003msec) 00:09:44.450 slat (nsec): min=1957, max=8224.3k, avg=17958.73, stdev=54365.85 00:09:44.450 clat (usec): min=984, max=10765, avg=2486.39, stdev=230.58 00:09:44.450 lat (usec): min=998, max=10791, avg=2504.35, stdev=225.87 00:09:44.450 clat percentiles (usec): 00:09:44.450 | 1.00th=[ 2147], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2409], 00:09:44.450 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:44.450 | 70.00th=[ 2540], 80.00th=[ 2606], 90.00th=[ 2638], 95.00th=[ 2704], 00:09:44.450 | 99.00th=[ 2835], 99.50th=[ 2966], 99.90th=[ 3916], 99.95th=[ 4555], 00:09:44.450 | 99.99th=[10552] 00:09:44.450 bw ( KiB/s): min=50240, max=52192, per=100.00%, avg=51469.78, stdev=643.65, samples=9 00:09:44.450 iops : min=50240, max=52192, avg=51469.78, stdev=643.65, samples=9 00:09:44.450 lat (usec) : 1000=0.01% 00:09:44.450 lat (msec) : 2=0.41%, 4=99.50%, 10=0.04%, 20=0.05% 00:09:44.450 cpu : usr=9.10%, sys=18.89%, ctx=151781, majf=0, minf=32 00:09:44.450 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:09:44.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.450 issued rwts: total=255577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.450 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.450 00:09:44.450 Run status group 0 (all jobs): 00:09:44.450 READ: bw=49.9MiB/s (52.3MB/s), 49.9MiB/s-49.9MiB/s (52.3MB/s-52.3MB/s), io=250MiB (262MB), run=5003-5003msec 00:09:44.450 00:09:44.450 Disk stats (read/write): 00:09:44.450 sda: ios=249911/0, merge=0/0, ticks=525669/0, in_queue=525669, util=98.05% 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # 
rpc_cmd bdev_get_iostat -b Malloc0 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:09:44.450 "tick_rate": 2100000000, 00:09:44.450 "ticks": 1076191251410, 00:09:44.450 "bdevs": [ 00:09:44.450 { 00:09:44.450 "name": "Malloc0", 00:09:44.450 "bytes_read": 262821376, 00:09:44.450 "num_read_ops": 255634, 00:09:44.450 "bytes_written": 0, 00:09:44.450 "num_write_ops": 0, 00:09:44.450 "bytes_unmapped": 0, 00:09:44.450 "num_unmap_ops": 0, 00:09:44.450 "bytes_copied": 0, 00:09:44.450 "num_copy_ops": 0, 00:09:44.450 "read_latency_ticks": 50975591718, 00:09:44.450 "max_read_latency_ticks": 456098, 00:09:44.450 "min_read_latency_ticks": 8652, 00:09:44.450 "write_latency_ticks": 0, 00:09:44.450 "max_write_latency_ticks": 0, 00:09:44.450 "min_write_latency_ticks": 0, 00:09:44.450 "unmap_latency_ticks": 0, 00:09:44.450 "max_unmap_latency_ticks": 0, 00:09:44.450 "min_unmap_latency_ticks": 0, 00:09:44.450 "copy_latency_ticks": 0, 00:09:44.450 "max_copy_latency_ticks": 0, 00:09:44.450 "min_copy_latency_ticks": 0, 00:09:44.450 "io_error": {} 00:09:44.450 } 00:09:44.450 ] 00:09:44.450 }' 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=255634 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=262821376 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=51126 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=52555980 00:09:44.450 10:10:17 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@90 -- # IOPS_LIMIT=25563 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@91 -- # BANDWIDTH_LIMIT=26277990 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@94 -- # READ_BANDWIDTH_LIMIT=13138995 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@98 -- # IOPS_LIMIT=25000 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@99 -- # BANDWIDTH_LIMIT_MB=25 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@100 -- # BANDWIDTH_LIMIT=26214400 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@101 -- # READ_BANDWIDTH_LIMIT_MB=12 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@102 -- # READ_BANDWIDTH_LIMIT=12582912 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@105 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 25000 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@106 -- # run_fio Malloc0 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.450 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:09:44.450 "tick_rate": 2100000000, 00:09:44.450 "ticks": 1076476862050, 00:09:44.450 "bdevs": [ 00:09:44.450 { 00:09:44.450 "name": "Malloc0", 00:09:44.450 "bytes_read": 262821376, 00:09:44.450 "num_read_ops": 255634, 00:09:44.450 "bytes_written": 0, 00:09:44.451 "num_write_ops": 0, 00:09:44.451 "bytes_unmapped": 0, 00:09:44.451 "num_unmap_ops": 0, 00:09:44.451 "bytes_copied": 0, 00:09:44.451 "num_copy_ops": 0, 00:09:44.451 "read_latency_ticks": 50975591718, 00:09:44.451 "max_read_latency_ticks": 456098, 00:09:44.451 "min_read_latency_ticks": 8652, 00:09:44.451 "write_latency_ticks": 0, 00:09:44.451 "max_write_latency_ticks": 0, 00:09:44.451 "min_write_latency_ticks": 0, 00:09:44.451 "unmap_latency_ticks": 0, 00:09:44.451 "max_unmap_latency_ticks": 0, 00:09:44.451 "min_unmap_latency_ticks": 0, 00:09:44.451 "copy_latency_ticks": 0, 00:09:44.451 "max_copy_latency_ticks": 0, 00:09:44.451 "min_copy_latency_ticks": 0, 00:09:44.451 "io_error": {} 00:09:44.451 } 00:09:44.451 ] 00:09:44.451 }' 00:09:44.451 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:09:44.709 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=255634 00:09:44.709 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:09:44.709 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=262821376 00:09:44.709 10:10:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:09:44.709 [global] 00:09:44.709 thread=1 00:09:44.709 invalidate=1 00:09:44.709 rw=randread 00:09:44.709 time_based=1 00:09:44.709 
runtime=5 00:09:44.709 ioengine=libaio 00:09:44.709 direct=1 00:09:44.709 bs=1024 00:09:44.709 iodepth=128 00:09:44.709 norandommap=1 00:09:44.709 numjobs=1 00:09:44.709 00:09:44.709 [job0] 00:09:44.709 filename=/dev/sda 00:09:44.709 queue_depth set to 113 (sda) 00:09:44.709 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:09:44.709 fio-3.35 00:09:44.709 Starting 1 thread 00:09:49.973 00:09:49.973 job0: (groupid=0, jobs=1): err= 0: pid=67799: Thu Jul 25 10:10:23 2024 00:09:49.973 read: IOPS=25.0k, BW=24.4MiB/s (25.6MB/s)(122MiB/5005msec) 00:09:49.973 slat (usec): min=2, max=1506, avg=37.55, stdev=148.75 00:09:49.973 clat (usec): min=2441, max=9715, avg=5082.73, stdev=248.49 00:09:49.973 lat (usec): min=2462, max=9718, avg=5120.29, stdev=283.15 00:09:49.973 clat percentiles (usec): 00:09:49.973 | 1.00th=[ 4490], 5.00th=[ 4883], 10.00th=[ 4948], 20.00th=[ 5014], 00:09:49.973 | 30.00th=[ 5014], 40.00th=[ 5014], 50.00th=[ 5014], 60.00th=[ 5014], 00:09:49.973 | 70.00th=[ 5080], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5735], 00:09:49.973 | 99.00th=[ 5866], 99.50th=[ 5932], 99.90th=[ 6128], 99.95th=[ 6849], 00:09:49.973 | 99.99th=[ 8717] 00:09:49.973 bw ( KiB/s): min=24972, max=25070, per=100.00%, avg=25022.22, stdev=33.58, samples=9 00:09:49.974 iops : min=24972, max=25070, avg=25022.22, stdev=33.58, samples=9 00:09:49.974 lat (msec) : 4=0.13%, 10=99.87% 00:09:49.974 cpu : usr=6.39%, sys=13.67%, ctx=67883, majf=0, minf=32 00:09:49.974 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:49.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.974 issued rwts: total=125078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.974 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.974 00:09:49.974 Run status group 0 (all jobs): 00:09:49.974 READ: 
bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=122MiB (128MB), run=5005-5005msec 00:09:49.974 00:09:49.974 Disk stats (read/write): 00:09:49.974 sda: ios=122189/0, merge=0/0, ticks=530289/0, in_queue=530289, util=98.17% 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:09:49.974 "tick_rate": 2100000000, 00:09:49.974 "ticks": 1087915274300, 00:09:49.974 "bdevs": [ 00:09:49.974 { 00:09:49.974 "name": "Malloc0", 00:09:49.974 "bytes_read": 390901248, 00:09:49.974 "num_read_ops": 380712, 00:09:49.974 "bytes_written": 0, 00:09:49.974 "num_write_ops": 0, 00:09:49.974 "bytes_unmapped": 0, 00:09:49.974 "num_unmap_ops": 0, 00:09:49.974 "bytes_copied": 0, 00:09:49.974 "num_copy_ops": 0, 00:09:49.974 "read_latency_ticks": 592533906050, 00:09:49.974 "max_read_latency_ticks": 5692230, 00:09:49.974 "min_read_latency_ticks": 8652, 00:09:49.974 "write_latency_ticks": 0, 00:09:49.974 "max_write_latency_ticks": 0, 00:09:49.974 "min_write_latency_ticks": 0, 00:09:49.974 "unmap_latency_ticks": 0, 00:09:49.974 "max_unmap_latency_ticks": 0, 00:09:49.974 "min_unmap_latency_ticks": 0, 00:09:49.974 "copy_latency_ticks": 0, 00:09:49.974 "max_copy_latency_ticks": 0, 00:09:49.974 "min_copy_latency_ticks": 0, 00:09:49.974 "io_error": {} 00:09:49.974 } 00:09:49.974 ] 00:09:49.974 }' 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=380712 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 
00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=390901248 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=25015 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=25615974 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@107 -- # verify_qos_limits 25015 25000 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=25015 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=25000 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@110 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@111 -- # run_fio Malloc0 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:09:49.974 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:09:49.974 10:10:23 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:09:50.231 10:10:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.231 10:10:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:50.231 10:10:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.231 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:09:50.231 "tick_rate": 2100000000, 00:09:50.231 "ticks": 1088200919046, 00:09:50.231 "bdevs": [ 00:09:50.231 { 00:09:50.231 "name": "Malloc0", 00:09:50.231 "bytes_read": 390901248, 00:09:50.231 "num_read_ops": 380712, 00:09:50.231 "bytes_written": 0, 00:09:50.231 "num_write_ops": 0, 00:09:50.231 "bytes_unmapped": 0, 00:09:50.231 "num_unmap_ops": 0, 00:09:50.231 "bytes_copied": 0, 00:09:50.231 "num_copy_ops": 0, 00:09:50.231 "read_latency_ticks": 592533906050, 00:09:50.231 "max_read_latency_ticks": 5692230, 00:09:50.231 "min_read_latency_ticks": 8652, 00:09:50.231 "write_latency_ticks": 0, 00:09:50.231 "max_write_latency_ticks": 0, 00:09:50.231 "min_write_latency_ticks": 0, 00:09:50.231 "unmap_latency_ticks": 0, 00:09:50.231 "max_unmap_latency_ticks": 0, 00:09:50.231 "min_unmap_latency_ticks": 0, 00:09:50.231 "copy_latency_ticks": 0, 00:09:50.231 "max_copy_latency_ticks": 0, 00:09:50.231 "min_copy_latency_ticks": 0, 00:09:50.231 "io_error": {} 00:09:50.231 } 00:09:50.231 ] 00:09:50.231 }' 00:09:50.231 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:09:50.231 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=380712 00:09:50.231 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:09:50.231 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=390901248 00:09:50.231 10:10:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 
00:09:50.231 [global] 00:09:50.231 thread=1 00:09:50.231 invalidate=1 00:09:50.231 rw=randread 00:09:50.231 time_based=1 00:09:50.231 runtime=5 00:09:50.231 ioengine=libaio 00:09:50.231 direct=1 00:09:50.231 bs=1024 00:09:50.231 iodepth=128 00:09:50.231 norandommap=1 00:09:50.231 numjobs=1 00:09:50.231 00:09:50.231 [job0] 00:09:50.231 filename=/dev/sda 00:09:50.231 queue_depth set to 113 (sda) 00:09:50.488 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:09:50.488 fio-3.35 00:09:50.488 Starting 1 thread 00:09:55.757 00:09:55.757 job0: (groupid=0, jobs=1): err= 0: pid=67898: Thu Jul 25 10:10:28 2024 00:09:55.757 read: IOPS=48.6k, BW=47.5MiB/s (49.8MB/s)(238MiB/5003msec) 00:09:55.757 slat (nsec): min=1896, max=450682, avg=19213.83, stdev=59167.32 00:09:55.757 clat (usec): min=790, max=4341, avg=2612.55, stdev=144.78 00:09:55.757 lat (usec): min=794, max=4343, avg=2631.76, stdev=133.28 00:09:55.757 clat percentiles (usec): 00:09:55.757 | 1.00th=[ 2245], 5.00th=[ 2409], 10.00th=[ 2442], 20.00th=[ 2474], 00:09:55.758 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2671], 00:09:55.758 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2769], 95.00th=[ 2835], 00:09:55.758 | 99.00th=[ 2900], 99.50th=[ 2900], 99.90th=[ 2966], 99.95th=[ 2966], 00:09:55.758 | 99.99th=[ 3982] 00:09:55.758 bw ( KiB/s): min=47014, max=51904, per=99.32%, avg=48288.00, stdev=1712.92, samples=9 00:09:55.758 iops : min=47014, max=51904, avg=48287.78, stdev=1713.08, samples=9 00:09:55.758 lat (usec) : 1000=0.01% 00:09:55.758 lat (msec) : 2=0.03%, 4=99.95%, 10=0.01% 00:09:55.758 cpu : usr=6.76%, sys=15.95%, ctx=140134, majf=0, minf=32 00:09:55.758 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:09:55.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.758 issued rwts: 
total=243232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.758 00:09:55.758 Run status group 0 (all jobs): 00:09:55.758 READ: bw=47.5MiB/s (49.8MB/s), 47.5MiB/s-47.5MiB/s (49.8MB/s-49.8MB/s), io=238MiB (249MB), run=5003-5003msec 00:09:55.758 00:09:55.758 Disk stats (read/write): 00:09:55.758 sda: ios=237246/0, merge=0/0, ticks=536129/0, in_queue=536129, util=98.13% 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:09:55.758 "tick_rate": 2100000000, 00:09:55.758 "ticks": 1099616023772, 00:09:55.758 "bdevs": [ 00:09:55.758 { 00:09:55.758 "name": "Malloc0", 00:09:55.758 "bytes_read": 639970816, 00:09:55.758 "num_read_ops": 623944, 00:09:55.758 "bytes_written": 0, 00:09:55.758 "num_write_ops": 0, 00:09:55.758 "bytes_unmapped": 0, 00:09:55.758 "num_unmap_ops": 0, 00:09:55.758 "bytes_copied": 0, 00:09:55.758 "num_copy_ops": 0, 00:09:55.758 "read_latency_ticks": 643217344602, 00:09:55.758 "max_read_latency_ticks": 5692230, 00:09:55.758 "min_read_latency_ticks": 8596, 00:09:55.758 "write_latency_ticks": 0, 00:09:55.758 "max_write_latency_ticks": 0, 00:09:55.758 "min_write_latency_ticks": 0, 00:09:55.758 "unmap_latency_ticks": 0, 00:09:55.758 "max_unmap_latency_ticks": 0, 00:09:55.758 "min_unmap_latency_ticks": 0, 00:09:55.758 "copy_latency_ticks": 0, 00:09:55.758 "max_copy_latency_ticks": 0, 00:09:55.758 "min_copy_latency_ticks": 0, 00:09:55.758 "io_error": {} 00:09:55.758 } 00:09:55.758 ] 00:09:55.758 }' 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r 
'.bdevs[0].num_read_ops' 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=623944 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=639970816 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=48646 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=49813913 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@112 -- # '[' 48646 -gt 25000 ']' 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@115 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 25000 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@116 -- # run_fio Malloc0 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:55.758 
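The trace above shows `run_fio` sampling `bdev_get_iostat` before and after a 5-second fio run and deriving `IOPS_RESULT` and `BANDWIDTH_RESULT` from the counter deltas. A minimal Python sketch of that arithmetic, using the exact counters from this log (the floor division is an assumption inferred from the reported values, not taken from qos.sh itself):

```python
# Reproduce the IOPS/bandwidth derivation from the bdev_get_iostat deltas.
# Counter values are taken verbatim from the log above.
run_time = 5  # seconds, from `local run_time=5` in run_fio

start_io_count = 380712      # num_read_ops before the fio run
end_io_count = 623944        # num_read_ops after the fio run
start_bytes_read = 390901248
end_bytes_read = 639970816

# Floor division is an assumption that matches the logged results.
iops_result = (end_io_count - start_io_count) // run_time
bandwidth_result = (end_bytes_read - start_bytes_read) // run_time

print(iops_result)       # matches IOPS_RESULT=48646 in the log
print(bandwidth_result)  # matches BANDWIDTH_RESULT=49813913 in the log
```

Since 48646 exceeds the 25000 IOPS target, the trace then applies `bdev_set_qos_limit Malloc0 --rw_ios_per_sec 25000` and reruns fio to confirm throttling.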
10:10:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:09:55.758 "tick_rate": 2100000000, 00:09:55.758 "ticks": 1099876410548, 00:09:55.758 "bdevs": [ 00:09:55.758 { 00:09:55.758 "name": "Malloc0", 00:09:55.758 "bytes_read": 639970816, 00:09:55.758 "num_read_ops": 623944, 00:09:55.758 "bytes_written": 0, 00:09:55.758 "num_write_ops": 0, 00:09:55.758 "bytes_unmapped": 0, 00:09:55.758 "num_unmap_ops": 0, 00:09:55.758 "bytes_copied": 0, 00:09:55.758 "num_copy_ops": 0, 00:09:55.758 "read_latency_ticks": 643217344602, 00:09:55.758 "max_read_latency_ticks": 5692230, 00:09:55.758 "min_read_latency_ticks": 8596, 00:09:55.758 "write_latency_ticks": 0, 00:09:55.758 "max_write_latency_ticks": 0, 00:09:55.758 "min_write_latency_ticks": 0, 00:09:55.758 "unmap_latency_ticks": 0, 00:09:55.758 "max_unmap_latency_ticks": 0, 00:09:55.758 "min_unmap_latency_ticks": 0, 00:09:55.758 "copy_latency_ticks": 0, 00:09:55.758 "max_copy_latency_ticks": 0, 00:09:55.758 "min_copy_latency_ticks": 0, 00:09:55.758 "io_error": {} 00:09:55.758 } 00:09:55.758 ] 00:09:55.758 }' 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=623944 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=639970816 00:09:55.758 10:10:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:09:55.758 [global] 00:09:55.758 thread=1 00:09:55.758 invalidate=1 00:09:55.758 rw=randread 00:09:55.758 time_based=1 00:09:55.758 runtime=5 00:09:55.758 ioengine=libaio 00:09:55.758 direct=1 00:09:55.758 bs=1024 00:09:55.758 iodepth=128 00:09:55.758 norandommap=1 00:09:55.758 numjobs=1 
00:09:55.758 00:09:55.758 [job0] 00:09:55.758 filename=/dev/sda 00:09:55.758 queue_depth set to 113 (sda) 00:09:56.017 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:09:56.017 fio-3.35 00:09:56.017 Starting 1 thread 00:10:01.291 00:10:01.291 job0: (groupid=0, jobs=1): err= 0: pid=67984: Thu Jul 25 10:10:34 2024 00:10:01.291 read: IOPS=25.0k, BW=24.4MiB/s (25.6MB/s)(122MiB/5005msec) 00:10:01.291 slat (usec): min=3, max=1036, avg=37.32, stdev=143.53 00:10:01.291 clat (usec): min=2516, max=9548, avg=5081.90, stdev=241.99 00:10:01.291 lat (usec): min=2544, max=9559, avg=5119.22, stdev=275.20 00:10:01.291 clat percentiles (usec): 00:10:01.291 | 1.00th=[ 4555], 5.00th=[ 4883], 10.00th=[ 4948], 20.00th=[ 5014], 00:10:01.291 | 30.00th=[ 5014], 40.00th=[ 5014], 50.00th=[ 5014], 60.00th=[ 5014], 00:10:01.291 | 70.00th=[ 5080], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5669], 00:10:01.291 | 99.00th=[ 5866], 99.50th=[ 5866], 99.90th=[ 5932], 99.95th=[ 6783], 00:10:01.291 | 99.99th=[ 8848] 00:10:01.291 bw ( KiB/s): min=24968, max=25100, per=100.00%, avg=25024.67, stdev=41.77, samples=9 00:10:01.291 iops : min=24968, max=25100, avg=25024.67, stdev=41.77, samples=9 00:10:01.291 lat (msec) : 4=0.10%, 10=99.90% 00:10:01.291 cpu : usr=6.25%, sys=16.15%, ctx=67668, majf=0, minf=32 00:10:01.291 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:01.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.291 issued rwts: total=125089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.291 00:10:01.291 Run status group 0 (all jobs): 00:10:01.291 READ: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=122MiB (128MB), run=5005-5005msec 00:10:01.291 00:10:01.291 Disk stats (read/write): 00:10:01.291 
sda: ios=122175/0, merge=0/0, ticks=528733/0, in_queue=528733, util=98.15% 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:10:01.292 "tick_rate": 2100000000, 00:10:01.292 "ticks": 1111297782904, 00:10:01.292 "bdevs": [ 00:10:01.292 { 00:10:01.292 "name": "Malloc0", 00:10:01.292 "bytes_read": 768061952, 00:10:01.292 "num_read_ops": 749033, 00:10:01.292 "bytes_written": 0, 00:10:01.292 "num_write_ops": 0, 00:10:01.292 "bytes_unmapped": 0, 00:10:01.292 "num_unmap_ops": 0, 00:10:01.292 "bytes_copied": 0, 00:10:01.292 "num_copy_ops": 0, 00:10:01.292 "read_latency_ticks": 1197714819052, 00:10:01.292 "max_read_latency_ticks": 5692230, 00:10:01.292 "min_read_latency_ticks": 8596, 00:10:01.292 "write_latency_ticks": 0, 00:10:01.292 "max_write_latency_ticks": 0, 00:10:01.292 "min_write_latency_ticks": 0, 00:10:01.292 "unmap_latency_ticks": 0, 00:10:01.292 "max_unmap_latency_ticks": 0, 00:10:01.292 "min_unmap_latency_ticks": 0, 00:10:01.292 "copy_latency_ticks": 0, 00:10:01.292 "max_copy_latency_ticks": 0, 00:10:01.292 "min_copy_latency_ticks": 0, 00:10:01.292 "io_error": {} 00:10:01.292 } 00:10:01.292 ] 00:10:01.292 }' 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=749033 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=768061952 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # 
IOPS_RESULT=25017 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=25618227 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@117 -- # verify_qos_limits 25017 25000 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=25017 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=25000 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@119 -- # echo 'I/O rate limiting tests successful' 00:10:01.292 I/O rate limiting tests successful 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@122 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 --rw_mbytes_per_sec 25 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@123 -- # run_fio Malloc0 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:10:01.292 
10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:10:01.292 "tick_rate": 2100000000, 00:10:01.292 "ticks": 1111600261066, 00:10:01.292 "bdevs": [ 00:10:01.292 { 00:10:01.292 "name": "Malloc0", 00:10:01.292 "bytes_read": 768061952, 00:10:01.292 "num_read_ops": 749033, 00:10:01.292 "bytes_written": 0, 00:10:01.292 "num_write_ops": 0, 00:10:01.292 "bytes_unmapped": 0, 00:10:01.292 "num_unmap_ops": 0, 00:10:01.292 "bytes_copied": 0, 00:10:01.292 "num_copy_ops": 0, 00:10:01.292 "read_latency_ticks": 1197714819052, 00:10:01.292 "max_read_latency_ticks": 5692230, 00:10:01.292 "min_read_latency_ticks": 8596, 00:10:01.292 "write_latency_ticks": 0, 00:10:01.292 "max_write_latency_ticks": 0, 00:10:01.292 "min_write_latency_ticks": 0, 00:10:01.292 "unmap_latency_ticks": 0, 00:10:01.292 "max_unmap_latency_ticks": 0, 00:10:01.292 "min_unmap_latency_ticks": 0, 00:10:01.292 "copy_latency_ticks": 0, 00:10:01.292 "max_copy_latency_ticks": 0, 00:10:01.292 "min_copy_latency_ticks": 0, 00:10:01.292 "io_error": {} 00:10:01.292 } 00:10:01.292 ] 00:10:01.292 }' 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=749033 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=768061952 00:10:01.292 10:10:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 
5 00:10:01.292 [global] 00:10:01.292 thread=1 00:10:01.292 invalidate=1 00:10:01.292 rw=randread 00:10:01.292 time_based=1 00:10:01.292 runtime=5 00:10:01.292 ioengine=libaio 00:10:01.292 direct=1 00:10:01.292 bs=1024 00:10:01.292 iodepth=128 00:10:01.292 norandommap=1 00:10:01.292 numjobs=1 00:10:01.292 00:10:01.292 [job0] 00:10:01.292 filename=/dev/sda 00:10:01.550 queue_depth set to 113 (sda) 00:10:01.550 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:10:01.550 fio-3.35 00:10:01.550 Starting 1 thread 00:10:06.823 00:10:06.823 job0: (groupid=0, jobs=1): err= 0: pid=68079: Thu Jul 25 10:10:39 2024 00:10:06.823 read: IOPS=25.6k, BW=25.0MiB/s (26.2MB/s)(125MiB/5005msec) 00:10:06.823 slat (usec): min=3, max=1384, avg=36.72, stdev=150.54 00:10:06.823 clat (usec): min=1779, max=8914, avg=4961.98, stdev=240.04 00:10:06.823 lat (usec): min=1796, max=8917, avg=4998.70, stdev=189.51 00:10:06.823 clat percentiles (usec): 00:10:06.823 | 1.00th=[ 4146], 5.00th=[ 4424], 10.00th=[ 4752], 20.00th=[ 4948], 00:10:06.823 | 30.00th=[ 5014], 40.00th=[ 5014], 50.00th=[ 5014], 60.00th=[ 5014], 00:10:06.823 | 70.00th=[ 5014], 80.00th=[ 5014], 90.00th=[ 5080], 95.00th=[ 5211], 00:10:06.823 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 5800], 99.95th=[ 6783], 00:10:06.823 | 99.99th=[ 8848] 00:10:06.823 bw ( KiB/s): min=25600, max=25652, per=100.00%, avg=25636.44, stdev=18.65, samples=9 00:10:06.823 iops : min=25600, max=25652, avg=25636.44, stdev=18.65, samples=9 00:10:06.823 lat (msec) : 2=0.01%, 4=0.35%, 10=99.64% 00:10:06.823 cpu : usr=5.98%, sys=13.37%, ctx=63990, majf=0, minf=32 00:10:06.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:06.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.823 issued rwts: total=128113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.823 
latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.823 00:10:06.823 Run status group 0 (all jobs): 00:10:06.823 READ: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=125MiB (131MB), run=5005-5005msec 00:10:06.823 00:10:06.823 Disk stats (read/write): 00:10:06.823 sda: ios=125157/0, merge=0/0, ticks=530258/0, in_queue=530258, util=98.13% 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:10:06.823 "tick_rate": 2100000000, 00:10:06.823 "ticks": 1123033004504, 00:10:06.823 "bdevs": [ 00:10:06.823 { 00:10:06.823 "name": "Malloc0", 00:10:06.823 "bytes_read": 899249664, 00:10:06.823 "num_read_ops": 877146, 00:10:06.823 "bytes_written": 0, 00:10:06.823 "num_write_ops": 0, 00:10:06.823 "bytes_unmapped": 0, 00:10:06.823 "num_unmap_ops": 0, 00:10:06.823 "bytes_copied": 0, 00:10:06.823 "num_copy_ops": 0, 00:10:06.823 "read_latency_ticks": 1721893325506, 00:10:06.823 "max_read_latency_ticks": 5897968, 00:10:06.823 "min_read_latency_ticks": 8596, 00:10:06.823 "write_latency_ticks": 0, 00:10:06.823 "max_write_latency_ticks": 0, 00:10:06.823 "min_write_latency_ticks": 0, 00:10:06.823 "unmap_latency_ticks": 0, 00:10:06.823 "max_unmap_latency_ticks": 0, 00:10:06.823 "min_unmap_latency_ticks": 0, 00:10:06.823 "copy_latency_ticks": 0, 00:10:06.823 "max_copy_latency_ticks": 0, 00:10:06.823 "min_copy_latency_ticks": 0, 00:10:06.823 "io_error": {} 00:10:06.823 } 00:10:06.823 ] 00:10:06.823 }' 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos 
-- qos/qos.sh@29 -- # end_io_count=877146 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=899249664 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=25622 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=26237542 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@124 -- # verify_qos_limits 26237542 26214400 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=26237542 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=26214400 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@127 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 0 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@128 -- # run_fio Malloc0 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@19 -- # local end_bytes_read 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:10:06.823 "tick_rate": 2100000000, 00:10:06.823 "ticks": 1123310223872, 00:10:06.823 "bdevs": [ 00:10:06.823 { 00:10:06.823 "name": "Malloc0", 00:10:06.823 "bytes_read": 899249664, 00:10:06.823 "num_read_ops": 877146, 00:10:06.823 "bytes_written": 0, 00:10:06.823 "num_write_ops": 0, 00:10:06.823 "bytes_unmapped": 0, 00:10:06.823 "num_unmap_ops": 0, 00:10:06.823 "bytes_copied": 0, 00:10:06.823 "num_copy_ops": 0, 00:10:06.823 "read_latency_ticks": 1721893325506, 00:10:06.823 "max_read_latency_ticks": 5897968, 00:10:06.823 "min_read_latency_ticks": 8596, 00:10:06.823 "write_latency_ticks": 0, 00:10:06.823 "max_write_latency_ticks": 0, 00:10:06.823 "min_write_latency_ticks": 0, 00:10:06.823 "unmap_latency_ticks": 0, 00:10:06.823 "max_unmap_latency_ticks": 0, 00:10:06.823 "min_unmap_latency_ticks": 0, 00:10:06.823 "copy_latency_ticks": 0, 00:10:06.823 "max_copy_latency_ticks": 0, 00:10:06.823 "min_copy_latency_ticks": 0, 00:10:06.823 "io_error": {} 00:10:06.823 } 00:10:06.823 ] 00:10:06.823 }' 00:10:06.823 10:10:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:10:06.823 10:10:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=877146 00:10:06.824 10:10:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:10:06.824 10:10:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=899249664 00:10:06.824 
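The `verify_qos_limits 26237542 26214400` call above passes via two `bc` comparisons that both return 1. The exact tolerance band is not visible in this log; the sketch below assumes an illustrative lower/upper window around the limit, and `verify_qos_limits` here is a hypothetical re-implementation, not the qos.sh function itself:

```python
# Sketch of the tolerance check seen in the trace: the measured result must
# sit inside a band around the configured limit. The 0.9/1.05 bounds are an
# assumption; qos.sh's actual bc expressions may use different factors.
def verify_qos_limits(result, limit, low=0.9, high=1.05):
    return limit * low <= result <= limit * high

# SPDK's --rw_mbytes_per_sec is in MiB/s, so the 25 MB/s limit in the trace
# becomes 25 * 2^20 = 26214400 bytes/s.
limit_bytes = 25 * 1024 * 1024

print(verify_qos_limits(26237542, limit_bytes))  # True: ~0.09% over, in band
print(verify_qos_limits(25017, 25000))           # True: the earlier IOPS check
```

Both measured values overshoot their limits by well under one percent, which is why each pair of `bc` checks evaluates to `'[' 1 -eq 1 ']'` in the trace.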
10:10:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:10:07.083 [global] 00:10:07.083 thread=1 00:10:07.083 invalidate=1 00:10:07.083 rw=randread 00:10:07.083 time_based=1 00:10:07.083 runtime=5 00:10:07.083 ioengine=libaio 00:10:07.083 direct=1 00:10:07.083 bs=1024 00:10:07.083 iodepth=128 00:10:07.083 norandommap=1 00:10:07.083 numjobs=1 00:10:07.083 00:10:07.083 [job0] 00:10:07.083 filename=/dev/sda 00:10:07.083 queue_depth set to 113 (sda) 00:10:07.083 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:10:07.083 fio-3.35 00:10:07.083 Starting 1 thread 00:10:12.348 00:10:12.348 job0: (groupid=0, jobs=1): err= 0: pid=68168: Thu Jul 25 10:10:45 2024 00:10:12.348 read: IOPS=51.0k, BW=49.8MiB/s (52.2MB/s)(249MiB/5003msec) 00:10:12.348 slat (nsec): min=1939, max=409967, avg=18146.27, stdev=54086.04 00:10:12.348 clat (usec): min=1501, max=4852, avg=2489.77, stdev=115.83 00:10:12.348 lat (usec): min=1508, max=4855, avg=2507.91, stdev=103.29 00:10:12.348 clat percentiles (usec): 00:10:12.348 | 1.00th=[ 2212], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2442], 00:10:12.348 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2474], 60.00th=[ 2474], 00:10:12.348 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2638], 95.00th=[ 2704], 00:10:12.348 | 99.00th=[ 2802], 99.50th=[ 2835], 99.90th=[ 3294], 99.95th=[ 3490], 00:10:12.348 | 99.99th=[ 4424] 00:10:12.348 bw ( KiB/s): min=50144, max=51616, per=100.00%, avg=51152.22, stdev=574.33, samples=9 00:10:12.348 iops : min=50144, max=51616, avg=51152.22, stdev=574.33, samples=9 00:10:12.348 lat (msec) : 2=0.11%, 4=99.87%, 10=0.02% 00:10:12.348 cpu : usr=7.80%, sys=17.51%, ctx=147432, majf=0, minf=32 00:10:12.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:12.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.348 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.348 issued rwts: total=255225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.348 00:10:12.348 Run status group 0 (all jobs): 00:10:12.348 READ: bw=49.8MiB/s (52.2MB/s), 49.8MiB/s-49.8MiB/s (52.2MB/s-52.2MB/s), io=249MiB (261MB), run=5003-5003msec 00:10:12.348 00:10:12.348 Disk stats (read/write): 00:10:12.348 sda: ios=249735/0, merge=0/0, ticks=533614/0, in_queue=533614, util=98.07% 00:10:12.348 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:12.348 10:10:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.348 10:10:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:12.348 10:10:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.348 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:10:12.348 "tick_rate": 2100000000, 00:10:12.348 "ticks": 1134711716186, 00:10:12.348 "bdevs": [ 00:10:12.348 { 00:10:12.348 "name": "Malloc0", 00:10:12.348 "bytes_read": 1160600064, 00:10:12.348 "num_read_ops": 1132371, 00:10:12.348 "bytes_written": 0, 00:10:12.348 "num_write_ops": 0, 00:10:12.348 "bytes_unmapped": 0, 00:10:12.348 "num_unmap_ops": 0, 00:10:12.348 "bytes_copied": 0, 00:10:12.348 "num_copy_ops": 0, 00:10:12.348 "read_latency_ticks": 1772534544386, 00:10:12.348 "max_read_latency_ticks": 5897968, 00:10:12.348 "min_read_latency_ticks": 8560, 00:10:12.348 "write_latency_ticks": 0, 00:10:12.348 "max_write_latency_ticks": 0, 00:10:12.349 "min_write_latency_ticks": 0, 00:10:12.349 "unmap_latency_ticks": 0, 00:10:12.349 "max_unmap_latency_ticks": 0, 00:10:12.349 "min_unmap_latency_ticks": 0, 00:10:12.349 "copy_latency_ticks": 0, 00:10:12.349 "max_copy_latency_ticks": 0, 00:10:12.349 "min_copy_latency_ticks": 0, 00:10:12.349 "io_error": {} 00:10:12.349 } 00:10:12.349 
] 00:10:12.349 }' 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=1132371 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=1160600064 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=51045 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=52270080 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@129 -- # '[' 52270080 -gt 26214400 ']' 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@132 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 25 --r_mbytes_per_sec 12 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@133 -- # run_fio Malloc0 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:10:12.349 "tick_rate": 2100000000, 00:10:12.349 "ticks": 1134946910440, 00:10:12.349 "bdevs": [ 00:10:12.349 { 00:10:12.349 "name": "Malloc0", 00:10:12.349 "bytes_read": 1160600064, 00:10:12.349 "num_read_ops": 1132371, 00:10:12.349 "bytes_written": 0, 00:10:12.349 "num_write_ops": 0, 00:10:12.349 "bytes_unmapped": 0, 00:10:12.349 "num_unmap_ops": 0, 00:10:12.349 "bytes_copied": 0, 00:10:12.349 "num_copy_ops": 0, 00:10:12.349 "read_latency_ticks": 1772534544386, 00:10:12.349 "max_read_latency_ticks": 5897968, 00:10:12.349 "min_read_latency_ticks": 8560, 00:10:12.349 "write_latency_ticks": 0, 00:10:12.349 "max_write_latency_ticks": 0, 00:10:12.349 "min_write_latency_ticks": 0, 00:10:12.349 "unmap_latency_ticks": 0, 00:10:12.349 "max_unmap_latency_ticks": 0, 00:10:12.349 "min_unmap_latency_ticks": 0, 00:10:12.349 "copy_latency_ticks": 0, 00:10:12.349 "max_copy_latency_ticks": 0, 00:10:12.349 "min_copy_latency_ticks": 0, 00:10:12.349 "io_error": {} 00:10:12.349 } 00:10:12.349 ] 00:10:12.349 }' 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=1132371 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=1160600064 00:10:12.349 10:10:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:10:12.606 [global] 00:10:12.606 thread=1 00:10:12.606 invalidate=1 00:10:12.606 rw=randread 00:10:12.606 time_based=1 00:10:12.606 runtime=5 00:10:12.606 
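The last `bdev_set_qos_limit` call above adds a read-only cap (`--r_mbytes_per_sec 12`) alongside the 25 MiB/s aggregate limit. Fio reports bandwidth in both binary and decimal units, which explains the "12.0MiB/s (12.6MB/s)" figures in the run that follows; a quick sketch of the conversion (unit semantics inferred from the 25 MiB/s case earlier in this log):

```python
# SPDK's *_mbytes_per_sec QoS limits appear to be in MiB/s (2^20 bytes),
# consistent with the 25 -> 26214400 bytes/s limit seen earlier in the trace.
limit_mib = 12
limit_bytes_per_sec = limit_mib * 1024 * 1024

print(limit_bytes_per_sec)                  # 12582912 bytes/s
print(round(limit_bytes_per_sec / 1e6, 1))  # 12.6 decimal MB/s, as fio prints
```

The subsequent fio run settling at 12.0 MiB/s shows the tighter read-specific limit, not the 25 MiB/s aggregate limit, is the one in effect for this read-only workload.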
ioengine=libaio 00:10:12.606 direct=1 00:10:12.606 bs=1024 00:10:12.606 iodepth=128 00:10:12.606 norandommap=1 00:10:12.606 numjobs=1 00:10:12.606 00:10:12.606 [job0] 00:10:12.606 filename=/dev/sda 00:10:12.606 queue_depth set to 113 (sda) 00:10:12.606 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:10:12.606 fio-3.35 00:10:12.606 Starting 1 thread 00:10:17.868 00:10:17.868 job0: (groupid=0, jobs=1): err= 0: pid=68253: Thu Jul 25 10:10:50 2024 00:10:17.868 read: IOPS=12.3k, BW=12.0MiB/s (12.6MB/s)(60.1MiB/5010msec) 00:10:17.868 slat (usec): min=3, max=4303, avg=77.90, stdev=229.80 00:10:17.868 clat (usec): min=2063, max=19043, avg=10336.92, stdev=529.89 00:10:17.868 lat (usec): min=2084, max=19048, avg=10414.82, stdev=544.13 00:10:17.868 clat percentiles (usec): 00:10:17.868 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10028], 20.00th=[10028], 00:10:17.868 | 30.00th=[10028], 40.00th=[10028], 50.00th=[10159], 60.00th=[10159], 00:10:17.868 | 70.00th=[10814], 80.00th=[10945], 90.00th=[10945], 95.00th=[10945], 00:10:17.868 | 99.00th=[11076], 99.50th=[11207], 99.90th=[13960], 99.95th=[16909], 00:10:17.868 | 99.99th=[18220] 00:10:17.868 bw ( KiB/s): min=12160, max=12312, per=99.99%, avg=12284.00, stdev=46.22, samples=10 00:10:17.868 iops : min=12160, max=12312, avg=12284.00, stdev=46.22, samples=10 00:10:17.868 lat (msec) : 4=0.06%, 10=7.36%, 20=92.58% 00:10:17.868 cpu : usr=4.17%, sys=9.48%, ctx=36121, majf=0, minf=32 00:10:17.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:17.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:17.868 issued rwts: total=61547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.868 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:17.868 00:10:17.868 Run status group 0 (all jobs): 00:10:17.868 READ: bw=12.0MiB/s 
(12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=60.1MiB (63.0MB), run=5010-5010msec 00:10:17.868 00:10:17.868 Disk stats (read/write): 00:10:17.868 sda: ios=60109/0, merge=0/0, ticks=544172/0, in_queue=544172, util=98.12% 00:10:17.868 10:10:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:17.868 10:10:50 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.868 10:10:50 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:17.868 10:10:50 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.868 10:10:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:10:17.868 "tick_rate": 2100000000, 00:10:17.868 "ticks": 1146318108362, 00:10:17.868 "bdevs": [ 00:10:17.868 { 00:10:17.868 "name": "Malloc0", 00:10:17.868 "bytes_read": 1223624192, 00:10:17.868 "num_read_ops": 1193918, 00:10:17.868 "bytes_written": 0, 00:10:17.868 "num_write_ops": 0, 00:10:17.868 "bytes_unmapped": 0, 00:10:17.868 "num_unmap_ops": 0, 00:10:17.868 "bytes_copied": 0, 00:10:17.868 "num_copy_ops": 0, 00:10:17.868 "read_latency_ticks": 2386937019630, 00:10:17.868 "max_read_latency_ticks": 12497182, 00:10:17.868 "min_read_latency_ticks": 8560, 00:10:17.868 "write_latency_ticks": 0, 00:10:17.868 "max_write_latency_ticks": 0, 00:10:17.868 "min_write_latency_ticks": 0, 00:10:17.868 "unmap_latency_ticks": 0, 00:10:17.868 "max_unmap_latency_ticks": 0, 00:10:17.868 "min_unmap_latency_ticks": 0, 00:10:17.868 "copy_latency_ticks": 0, 00:10:17.868 "max_copy_latency_ticks": 0, 00:10:17.868 "min_copy_latency_ticks": 0, 00:10:17.868 "io_error": {} 00:10:17.868 } 00:10:17.868 ] 00:10:17.868 }' 00:10:17.868 10:10:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:10:17.868 10:10:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=1193918 00:10:17.868 10:10:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 
00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=1223624192 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=12309 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=12604825 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@134 -- # verify_qos_limits 12604825 12582912 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=12604825 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=12582912 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:10:17.868 I/O bandwidth limiting tests successful 00:10:17.868 Cleaning up iSCSI connection 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@136 -- # echo 'I/O bandwidth limiting tests successful' 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@138 -- # iscsicleanup 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:10:17.868 Logging out of session [sid: 12, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:17.868 Logout of [sid: 12, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
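[Editor's note] The `IOPS_RESULT`/`BANDWIDTH_RESULT` values above are plain deltas between the two `bdev_get_iostat` snapshots, divided by the fio runtime, and `verify_qos_limits` then compares the measured bandwidth against the configured 12 MiB/s read cap. A self-contained sketch of that arithmetic, using the counter values copied from this log (the +10% tolerance factor here is illustrative; the real bounds are computed with `bc` inside `qos.sh`):

```shell
# Counter values copied from the two bdev_get_iostat snapshots in this log
start_io_count=1132371;      end_io_count=1193918
start_bytes_read=1160600064; end_bytes_read=1223624192
run_time=5                                  # fio runtime in seconds

# Delta / runtime, as qos.sh computes at lines @32/@33
IOPS_RESULT=$(( (end_io_count - start_io_count) / run_time ))
BANDWIDTH_RESULT=$(( (end_bytes_read - start_bytes_read) / run_time ))

limit=$(( 12 * 1024 * 1024 ))               # --r_mbytes_per_sec 12 => 12582912 B/s

echo "IOPS=$IOPS_RESULT BW=$BANDWIDTH_RESULT"
# Pass if measured bandwidth stays within +10% of the configured read limit
(( BANDWIDTH_RESULT * 10 <= limit * 11 )) && echo "within limit"
```

With these inputs the sketch reproduces the log's `IOPS_RESULT=12309` and `BANDWIDTH_RESULT=12604825`, and 12604825 vs. the 12582912 limit is a ~0.2% overshoot, well inside tolerance — hence "I/O bandwidth limiting tests successful".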
00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@983 -- # rm -rf 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@139 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:Target1 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@141 -- # rm -f ./local-job0-0-verify.state 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@143 -- # killprocess 67626 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@948 -- # '[' -z 67626 ']' 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@952 -- # kill -0 67626 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@953 -- # uname 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:17.868 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67626 00:10:18.126 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:18.126 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:18.126 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67626' 00:10:18.126 killing process with pid 67626 00:10:18.126 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@967 -- # kill 67626 00:10:18.126 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@972 -- # wait 67626 00:10:18.383 
10:10:51 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@145 -- # iscsitestfini 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:10:18.383 00:10:18.383 real 0m41.664s 00:10:18.383 user 0m36.841s 00:10:18.383 sys 0m11.882s 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:18.383 ************************************ 00:10:18.383 END TEST iscsi_tgt_qos 00:10:18.383 ************************************ 00:10:18.383 10:10:51 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:10:18.383 10:10:51 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@39 -- # run_test iscsi_tgt_ip_migration /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:10:18.383 10:10:51 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:18.383 10:10:51 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.383 10:10:51 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:10:18.383 ************************************ 00:10:18.383 START TEST iscsi_tgt_ip_migration 00:10:18.383 ************************************ 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:10:18.383 * Looking for test storage... 
00:10:18.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@11 -- # iscsitestinit 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@13 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@14 -- # pids=() 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@16 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:18.383 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:18.384 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:10:18.641 10:10:51 
iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:18.641 #define SPDK_CONFIG_H 00:10:18.641 #define SPDK_CONFIG_APPS 1 00:10:18.641 #define SPDK_CONFIG_ARCH native 00:10:18.641 #undef SPDK_CONFIG_ASAN 00:10:18.641 #undef SPDK_CONFIG_AVAHI 00:10:18.641 #undef SPDK_CONFIG_CET 00:10:18.641 #define SPDK_CONFIG_COVERAGE 1 00:10:18.641 #define SPDK_CONFIG_CROSS_PREFIX 00:10:18.641 #undef SPDK_CONFIG_CRYPTO 00:10:18.641 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:18.641 #undef SPDK_CONFIG_CUSTOMOCF 00:10:18.641 #undef SPDK_CONFIG_DAOS 00:10:18.641 #define SPDK_CONFIG_DAOS_DIR 00:10:18.641 #define SPDK_CONFIG_DEBUG 1 00:10:18.641 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:18.641 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:18.641 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:18.641 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:18.641 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:18.641 #undef SPDK_CONFIG_DPDK_UADK 00:10:18.641 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:18.641 #define SPDK_CONFIG_EXAMPLES 1 
00:10:18.641 #undef SPDK_CONFIG_FC 00:10:18.641 #define SPDK_CONFIG_FC_PATH 00:10:18.641 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:18.641 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:18.641 #undef SPDK_CONFIG_FUSE 00:10:18.641 #undef SPDK_CONFIG_FUZZER 00:10:18.641 #define SPDK_CONFIG_FUZZER_LIB 00:10:18.641 #undef SPDK_CONFIG_GOLANG 00:10:18.641 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:18.641 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:18.641 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:18.641 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:18.641 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:18.641 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:18.641 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:18.641 #define SPDK_CONFIG_IDXD 1 00:10:18.641 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:18.641 #undef SPDK_CONFIG_IPSEC_MB 00:10:18.641 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:18.641 #define SPDK_CONFIG_ISAL 1 00:10:18.641 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:18.641 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:18.641 #define SPDK_CONFIG_LIBDIR 00:10:18.641 #undef SPDK_CONFIG_LTO 00:10:18.641 #define SPDK_CONFIG_MAX_LCORES 128 00:10:18.641 #define SPDK_CONFIG_NVME_CUSE 1 00:10:18.641 #undef SPDK_CONFIG_OCF 00:10:18.641 #define SPDK_CONFIG_OCF_PATH 00:10:18.641 #define SPDK_CONFIG_OPENSSL_PATH 00:10:18.641 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:18.641 #define SPDK_CONFIG_PGO_DIR 00:10:18.641 #undef SPDK_CONFIG_PGO_USE 00:10:18.641 #define SPDK_CONFIG_PREFIX /usr/local 00:10:18.641 #undef SPDK_CONFIG_RAID5F 00:10:18.641 #define SPDK_CONFIG_RBD 1 00:10:18.641 #define SPDK_CONFIG_RDMA 1 00:10:18.641 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:18.641 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:18.641 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:18.641 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:18.641 #define SPDK_CONFIG_SHARED 1 00:10:18.641 #undef SPDK_CONFIG_SMA 00:10:18.641 #define SPDK_CONFIG_TESTS 1 00:10:18.641 #undef SPDK_CONFIG_TSAN 00:10:18.641 #define SPDK_CONFIG_UBLK 1 
00:10:18.641 #define SPDK_CONFIG_UBSAN 1 00:10:18.641 #undef SPDK_CONFIG_UNIT_TESTS 00:10:18.641 #undef SPDK_CONFIG_URING 00:10:18.641 #define SPDK_CONFIG_URING_PATH 00:10:18.641 #undef SPDK_CONFIG_URING_ZNS 00:10:18.641 #undef SPDK_CONFIG_USDT 00:10:18.641 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:18.641 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:18.641 #undef SPDK_CONFIG_VFIO_USER 00:10:18.641 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:18.641 #define SPDK_CONFIG_VHOST 1 00:10:18.641 #define SPDK_CONFIG_VIRTIO 1 00:10:18.641 #undef SPDK_CONFIG_VTUNE 00:10:18.641 #define SPDK_CONFIG_VTUNE_DIR 00:10:18.641 #define SPDK_CONFIG_WERROR 1 00:10:18.641 #define SPDK_CONFIG_WPDK_DIR 00:10:18.641 #undef SPDK_CONFIG_XNVME 00:10:18.641 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@17 -- # NETMASK=127.0.0.0/24 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@18 -- # MIGRATION_ADDRESS=127.0.0.2 00:10:18.641 Running ip migration tests 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@56 -- # echo 'Running ip migration tests' 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@57 -- # timing_enter start_iscsi_tgt_0 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@58 -- # rpc_first_addr=/var/tmp/spdk0.sock 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@59 -- # iscsi_tgt_start /var/tmp/spdk0.sock 1 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- 
ip_migration/ip_migration.sh@39 -- # pid=68385 00:10:18.641 Process pid: 68385 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 68385' 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -m 1 --wait-for-rpc 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 68385 /var/tmp/spdk0.sock 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@829 -- # '[' -z 68385 ']' 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk0.sock 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.641 10:10:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:18.641 [2024-07-25 10:10:51.730504] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:10:18.642 [2024-07-25 10:10:51.730623] [ DPDK EAL parameters: iscsi --no-shconf -c 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68385 ] 00:10:18.642 [2024-07-25 10:10:51.875717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.898 [2024-07-25 10:10:51.970201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.463 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:19.463 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@862 -- # return 0 00:10:19.463 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_set_options -o 30 -a 64 00:10:19.463 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.463 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:19.463 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.463 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk0.sock framework_start_init 00:10:19.463 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.463 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.720 iscsi_tgt is listening. Running tests... 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
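[Editor's note] The `waitforlisten 68385 /var/tmp/spdk0.sock` step above blocks until the freshly started `iscsi_tgt` exposes its UNIX-domain RPC socket (note `max_retries=100` in the trace), failing fast if the process dies first. A minimal illustrative sketch of that polling pattern — names are hypothetical; SPDK's real helper lives in `test/common/autotest_common.sh`:

```shell
# Illustrative waitforlisten-style helper: poll until the given pid has
# created the expected UNIX-domain socket, bailing out if the pid exits.
waitforsocket() {
  local pid=$1 sock=$2 retries=${3:-100}
  while (( retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process exited early
    [ -S "$sock" ] && return 0               # socket exists: RPC server is up
    sleep 0.1
  done
  return 1                                   # timed out waiting for the socket
}
```

This serialization is what makes the subsequent `rpc_cmd -s /var/tmp/spdk0.sock …` calls safe to issue immediately after launching the target with `--wait-for-rpc`.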
00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk0.sock bdev_malloc_create 64 512 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:19.720 Malloc0 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@60 -- # timing_exit start_iscsi_tgt_0 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@62 -- # timing_enter start_iscsi_tgt_1 00:10:19.720 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@63 -- # rpc_second_addr=/var/tmp/spdk1.sock 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- 
ip_migration/ip_migration.sh@64 -- # iscsi_tgt_start /var/tmp/spdk1.sock 2 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=68418 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 68418' 00:10:19.721 Process pid: 68418 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -m 2 --wait-for-rpc 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 68418 /var/tmp/spdk1.sock 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@829 -- # '[' -z 68418 ']' 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk1.sock 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:19.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:19.721 10:10:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:19.721 [2024-07-25 10:10:52.950748] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:10:19.721 [2024-07-25 10:10:52.950864] [ DPDK EAL parameters: iscsi --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68418 ] 00:10:19.978 [2024-07-25 10:10:53.094267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.978 [2024-07-25 10:10:53.197685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.543 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.543 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@862 -- # return 0 00:10:20.543 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_set_options -o 30 -a 64 00:10:20.543 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.543 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:20.800 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.800 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk1.sock framework_start_init 00:10:20.800 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.800 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:20.800 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.800 iscsi_tgt is listening. Running tests... 00:10:20.800 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:10:20.800 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:10:20.800 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.800 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:20.800 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.800 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk1.sock bdev_malloc_create 64 512 00:10:20.800 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.800 10:10:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:20.800 Malloc0 00:10:20.800 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.800 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:20.800 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@65 -- # timing_exit start_iscsi_tgt_1 00:10:20.800 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:20.800 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:21.057 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@67 -- # rpc_add_target_node /var/tmp/spdk0.sock 00:10:21.057 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:10:21.057 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:10:21.057 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.057 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:21.057 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.057 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:10:21.057 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.057 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:21.057 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.057 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:10:21.057 10:10:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@69 -- # sleep 1 00:10:22.057 10:10:55 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@70 -- # iscsiadm -m discovery -t sendtargets -p 127.0.0.2:3260 00:10:22.057 127.0.0.2:3260,1 iqn.2016-06.io.spdk:target1 00:10:22.057 10:10:55 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@71 -- # sleep 1 00:10:22.989 10:10:56 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@72 -- # iscsiadm -m node --login -p 127.0.0.2:3260 00:10:22.989 Logging in to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:10:22.989 Login to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 
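[Editor's note] The `rpc_add_target_node` steps traced above (ip_migration.sh lines @28–@31) temporarily assign the shared migration address 127.0.0.2 to the target's interface, register a portal group and target node listening on it, then drop the address again so it can later be claimed by whichever target instance is live. A dry-run sketch of that sequence — `RUN=echo` prints the commands instead of executing them, since the real ones need root and the `spdk_iscsi_ns` namespace, and the `scripts/rpc.py` path is illustrative:

```shell
# Dry-run sketch of the rpc_add_target_node sequence from ip_migration.sh.
# RUN=echo prints commands instead of running them (the real ones require
# root and the spdk_iscsi_ns network namespace).
RUN=${RUN:-echo}
rpc_add_target_node() {
  local rpc_sock=$1
  # Bring the shared migration address up on this target's interface
  $RUN ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int
  # Register a portal group + target node listening on that address
  $RUN scripts/rpc.py -s "$rpc_sock" iscsi_create_portal_group 1 127.0.0.2:3260
  $RUN scripts/rpc.py -s "$rpc_sock" iscsi_create_target_node \
      target1 target1_alias Malloc0:0 1:2 64 -d
  # Drop the address again; the live target re-adds it when it takes over
  $RUN ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int
}
rpc_add_target_node /var/tmp/spdk0.sock
```

Later in the test the same helper is replayed against `/var/tmp/spdk1.sock` after killing the first instance, which is the actual "migration": the initiator's session to 127.0.0.2 survives while fio keeps running.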
00:10:22.989 10:10:56 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@73 -- # waitforiscsidevices 1 00:10:22.989 10:10:56 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@116 -- # local num=1 00:10:22.989 10:10:56 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:22.989 10:10:56 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:22.989 10:10:56 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:22.989 10:10:56 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:22.989 [2024-07-25 10:10:56.167767] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:22.989 10:10:56 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # n=1 00:10:22.989 10:10:56 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:10:22.989 10:10:56 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@123 -- # return 0 00:10:22.989 10:10:56 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@77 -- # fiopid=68498 00:10:22.989 10:10:56 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@78 -- # sleep 3 00:10:22.989 10:10:56 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 32 -t randrw -r 12 00:10:22.989 [global] 00:10:22.989 thread=1 00:10:22.989 invalidate=1 00:10:22.989 rw=randrw 00:10:22.989 time_based=1 00:10:22.989 runtime=12 00:10:22.989 ioengine=libaio 00:10:22.989 direct=1 00:10:22.989 bs=4096 00:10:22.989 iodepth=32 00:10:22.989 norandommap=1 00:10:22.989 numjobs=1 00:10:22.989 00:10:22.989 [job0] 00:10:22.989 filename=/dev/sda 00:10:22.989 queue_depth set to 113 (sda) 00:10:23.246 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32 00:10:23.246 fio-3.35 
00:10:23.246 Starting 1 thread 00:10:23.246 [2024-07-25 10:10:56.361321] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:26.530 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@80 -- # rpc_cmd -s /var/tmp/spdk0.sock spdk_kill_instance SIGTERM 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@81 -- # wait 68385 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@83 -- # rpc_add_target_node /var/tmp/spdk1.sock 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:10:26.531 10:10:59 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@85 -- # wait 68498 00:10:36.528 [2024-07-25 10:11:08.480080] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:36.528 00:10:36.528 job0: (groupid=0, jobs=1): err= 0: pid=68530: Thu Jul 25 10:11:08 2024 00:10:36.528 read: IOPS=18.5k, BW=72.1MiB/s (75.6MB/s)(866MiB/12001msec) 00:10:36.528 slat (usec): min=2, max=101, avg= 4.51, stdev= 2.77 00:10:36.528 clat (usec): min=334, max=2003.6k, avg=880.11, stdev=17541.07 00:10:36.528 lat (usec): min=342, max=2003.6k, avg=884.62, stdev=17541.11 00:10:36.528 clat percentiles (usec): 00:10:36.528 | 1.00th=[ 469], 5.00th=[ 523], 10.00th=[ 578], 20.00th=[ 652], 00:10:36.528 | 30.00th=[ 685], 40.00th=[ 701], 50.00th=[ 725], 60.00th=[ 750], 00:10:36.528 | 70.00th=[ 775], 80.00th=[ 807], 90.00th=[ 873], 95.00th=[ 930], 00:10:36.528 | 99.00th=[ 996], 99.50th=[ 1020], 99.90th=[ 1074], 99.95th=[ 1090], 00:10:36.528 | 99.99th=[ 1500] 00:10:36.528 bw ( KiB/s): min=34960, max=90368, per=100.00%, avg=84373.20, stdev=14194.50, samples=20 00:10:36.528 iops : min= 8740, max=22592, avg=21093.30, stdev=3548.63, samples=20 00:10:36.528 write: IOPS=18.5k, BW=72.1MiB/s (75.6MB/s)(866MiB/12001msec); 0 zone resets 00:10:36.528 slat (usec): min=2, max=104, avg= 4.48, stdev= 2.85 00:10:36.528 clat (usec): min=200, max=2003.4k, avg=842.98, stdev=16476.67 00:10:36.528 lat (usec): min=204, max=2003.4k, avg=847.46, stdev=16476.70 00:10:36.528 clat percentiles (usec): 00:10:36.528 | 1.00th=[ 445], 5.00th=[ 515], 10.00th=[ 570], 20.00th=[ 627], 00:10:36.528 | 30.00th=[ 652], 40.00th=[ 676], 50.00th=[ 701], 60.00th=[ 725], 00:10:36.528 | 70.00th=[ 758], 80.00th=[ 799], 90.00th=[ 865], 95.00th=[ 914], 00:10:36.528 | 99.00th=[ 971], 99.50th=[ 996], 99.90th=[ 1057], 
99.95th=[ 1074], 00:10:36.528 | 99.99th=[ 1434] 00:10:36.528 bw ( KiB/s): min=36200, max=90488, per=100.00%, avg=84336.80, stdev=14121.85, samples=20 00:10:36.528 iops : min= 9050, max=22622, avg=21084.20, stdev=3530.46, samples=20 00:10:36.528 lat (usec) : 250=0.01%, 500=3.63%, 750=61.52%, 1000=34.21% 00:10:36.528 lat (msec) : 2=0.63%, >=2000=0.01% 00:10:36.528 cpu : usr=7.58%, sys=15.77%, ctx=36474, majf=0, minf=1 00:10:36.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% 00:10:36.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:10:36.528 issued rwts: total=221602,221583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.528 latency : target=0, window=0, percentile=100.00%, depth=32 00:10:36.528 00:10:36.528 Run status group 0 (all jobs): 00:10:36.528 READ: bw=72.1MiB/s (75.6MB/s), 72.1MiB/s-72.1MiB/s (75.6MB/s-75.6MB/s), io=866MiB (908MB), run=12001-12001msec 00:10:36.528 WRITE: bw=72.1MiB/s (75.6MB/s), 72.1MiB/s-72.1MiB/s (75.6MB/s-75.6MB/s), io=866MiB (908MB), run=12001-12001msec 00:10:36.528 00:10:36.528 Disk stats (read/write): 00:10:36.528 sda: ios=219138/219055, merge=0/0, ticks=178872/176098, in_queue=354971, util=99.36% 00:10:36.528 Cleaning up iSCSI connection 00:10:36.528 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@87 -- # trap - SIGINT SIGTERM EXIT 00:10:36.528 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@89 -- # iscsicleanup 00:10:36.528 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:10:36.528 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:10:36.528 Logging out of session [sid: 13, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:10:36.528 Logout of [sid: 13, target: iqn.2016-06.io.spdk:target1, portal: 
127.0.0.2,3260] successful. 00:10:36.528 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:10:36.529 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@983 -- # rm -rf 00:10:36.529 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@91 -- # rpc_cmd -s /var/tmp/spdk1.sock spdk_kill_instance SIGTERM 00:10:36.529 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.529 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:36.529 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.529 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@92 -- # wait 68418 00:10:36.529 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@93 -- # iscsitestfini 00:10:36.529 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:10:36.529 00:10:36.529 real 0m17.409s 00:10:36.529 user 0m22.091s 00:10:36.529 sys 0m4.581s 00:10:36.529 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.529 10:11:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:36.529 ************************************ 00:10:36.529 END TEST iscsi_tgt_ip_migration 00:10:36.529 ************************************ 00:10:36.529 10:11:08 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:10:36.529 10:11:08 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@40 -- # run_test iscsi_tgt_trace_record /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:10:36.529 10:11:08 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:36.529 10:11:08 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.529 10:11:08 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:10:36.529 
************************************ 00:10:36.529 START TEST iscsi_tgt_trace_record 00:10:36.529 ************************************ 00:10:36.529 10:11:08 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:10:36.529 * Looking for test storage... 00:10:36.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:10:36.529 10:11:09 
iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@11 -- # iscsitestinit 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@13 -- # TRACE_TMP_FOLDER=./tmp-trace 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@14 -- # TRACE_RECORD_OUTPUT=./tmp-trace/record.trace 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@15 -- # TRACE_RECORD_NOTICE_LOG=./tmp-trace/record.notice 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@16 -- # TRACE_TOOL_LOG=./tmp-trace/trace.log 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@22 -- # '[' -z 10.0.0.1 ']' 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@27 -- # '[' -z 10.0.0.2 ']' 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@32 -- # NUM_TRACE_ENTRIES=4096 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@33 -- # MALLOC_BDEV_SIZE=64 00:10:36.529 10:11:09 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@34 -- # MALLOC_BLOCK_SIZE=4096 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@36 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@37 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@39 -- # timing_enter start_iscsi_tgt 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:10:36.529 start iscsi_tgt with trace enabled 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@41 -- # echo 'start iscsi_tgt with trace enabled' 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@43 -- # iscsi_pid=68722 00:10:36.529 Process pid: 68722 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@44 -- # echo 'Process pid: 68722' 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@46 -- # trap 'killprocess $iscsi_pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@48 -- # waitforlisten 68722 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@829 -- # '[' -z 68722 ']' 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@42 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xf 
--num-trace-entries 4096 --tpoint-group all 00:10:36.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:36.529 10:11:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:10:36.529 [2024-07-25 10:11:09.189605] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:10:36.529 [2024-07-25 10:11:09.190252] [ DPDK EAL parameters: iscsi --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68722 ] 00:10:36.529 [2024-07-25 10:11:09.334929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.529 [2024-07-25 10:11:09.445940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask all specified. 00:10:36.529 [2024-07-25 10:11:09.445996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s iscsi -p 68722' to capture a snapshot of events at runtime. 00:10:36.529 [2024-07-25 10:11:09.446012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.529 [2024-07-25 10:11:09.446025] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.529 [2024-07-25 10:11:09.446044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/iscsi_trace.pid68722 for offline analysis/debug. 
00:10:36.529 [2024-07-25 10:11:09.446248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.529 [2024-07-25 10:11:09.449450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.529 [2024-07-25 10:11:09.449539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.529 [2024-07-25 10:11:09.449542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@862 -- # return 0 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@50 -- # echo 'iscsi_tgt is listening. Running tests...' 00:10:37.099 iscsi_tgt is listening. Running tests... 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@52 -- # timing_exit start_iscsi_tgt 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@54 -- # mkdir -p ./tmp-trace 00:10:37.099 Trace record pid: 68757 00:10:37.099 Create bdevs and target nodes 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@56 -- # record_pid=68757 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@57 -- # echo 'Trace record pid: 68757' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace_record -s iscsi -p 68722 -f ./tmp-trace/record.trace -q 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@59 -- # RPCS= 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@60 -- # 
RPCS+='iscsi_create_portal_group 1 10.0.0.1:3260\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@61 -- # RPCS+='iscsi_create_initiator_group 2 ANY 10.0.0.2/32\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@63 -- # echo 'Create bdevs and target nodes' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@64 -- # CONNECTION_NUMBER=15 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # seq 0 15 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc0\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target0 Target0_alias Malloc0:0 1:2 256 -d\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc1\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target1 Target1_alias Malloc1:0 1:2 256 -d\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc2\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target2 Target2_alias Malloc2:0 1:2 256 -d\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for 
i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc3\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target3 Target3_alias Malloc3:0 1:2 256 -d\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc4\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target4 Target4_alias Malloc4:0 1:2 256 -d\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc5\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target5 Target5_alias Malloc5:0 1:2 256 -d\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc6\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target6 Target6_alias Malloc6:0 1:2 256 -d\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc7\n' 00:10:37.099 10:11:10 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target7 Target7_alias Malloc7:0 1:2 256 -d\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc8\n' 00:10:37.099 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target8 Target8_alias Malloc8:0 1:2 256 -d\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc9\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target9 Target9_alias Malloc9:0 1:2 256 -d\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc10\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target10 Target10_alias Malloc10:0 1:2 256 -d\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc11\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target11 Target11_alias Malloc11:0 1:2 256 -d\n' 00:10:37.100 10:11:10 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc12\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target12 Target12_alias Malloc12:0 1:2 256 -d\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc13\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target13 Target13_alias Malloc13:0 1:2 256 -d\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc14\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target14 Target14_alias Malloc14:0 1:2 256 -d\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc15\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target15 Target15_alias Malloc15:0 1:2 256 -d\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # echo -e iscsi_create_portal_group 1 '10.0.0.1:3260\niscsi_create_initiator_group' 2 ANY 
'10.0.0.2/32\nbdev_malloc_create' 64 4096 -b 'Malloc0\niscsi_create_target_node' Target0 Target0_alias Malloc0:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc1\niscsi_create_target_node' Target1 Target1_alias Malloc1:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc2\niscsi_create_target_node' Target2 Target2_alias Malloc2:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc3\niscsi_create_target_node' Target3 Target3_alias Malloc3:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc4\niscsi_create_target_node' Target4 Target4_alias Malloc4:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc5\niscsi_create_target_node' Target5 Target5_alias Malloc5:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc6\niscsi_create_target_node' Target6 Target6_alias Malloc6:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc7\niscsi_create_target_node' Target7 Target7_alias Malloc7:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc8\niscsi_create_target_node' Target8 Target8_alias Malloc8:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc9\niscsi_create_target_node' Target9 Target9_alias Malloc9:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc10\niscsi_create_target_node' Target10 Target10_alias Malloc10:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc11\niscsi_create_target_node' Target11 Target11_alias Malloc11:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc12\niscsi_create_target_node' Target12 Target12_alias Malloc12:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc13\niscsi_create_target_node' Target13 Target13_alias Malloc13:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc14\niscsi_create_target_node' Target14 Target14_alias Malloc14:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc15\niscsi_create_target_node' Target15 Target15_alias Malloc15:0 1:2 256 '-d\n' 00:10:37.100 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:37.667 Malloc0 
00:10:37.667 Malloc1
00:10:37.667 Malloc2
00:10:37.667 Malloc3
00:10:37.667 Malloc4
00:10:37.667 Malloc5
00:10:37.667 Malloc6
00:10:37.667 Malloc7
00:10:37.667 Malloc8
00:10:37.667 Malloc9
00:10:37.667 Malloc10
00:10:37.667 Malloc11
00:10:37.667 Malloc12
00:10:37.667 Malloc13
00:10:37.667 Malloc14
00:10:37.667 Malloc15
00:10:37.667 10:11:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@71 -- # sleep 1
00:10:38.601 10:11:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@73 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target0
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14
00:10:38.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15
00:10:38.601 10:11:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@74 -- # iscsiadm -m node --login -p 10.0.0.1:3260
00:10:38.860 [2024-07-25 10:11:11.867045] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:38.860 [2024-07-25 10:11:11.882122] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:38.860 [2024-07-25 10:11:11.902619] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:38.860 [2024-07-25 10:11:11.920937] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:38.860 [2024-07-25 10:11:11.963772] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:38.860 [2024-07-25 10:11:11.967001] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:38.860 [2024-07-25 10:11:12.010957] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:38.860 [2024-07-25 10:11:12.038190] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:38.860 [2024-07-25 10:11:12.062014] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:38.860 [2024-07-25 10:11:12.084163] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:38.860 [2024-07-25 10:11:12.105754] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:38.860 [2024-07-25 10:11:12.113535] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.118 [2024-07-25 10:11:12.158007] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.118 [2024-07-25 10:11:12.167510] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.118 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260]
00:10:39.118 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260]
00:10:39.118 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260]
00:10:39.118 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260]
00:10:39.118 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260]
00:10:39.118 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260]
00:10:39.118 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260]
00:10:39.118 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260]
00:10:39.118 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260]
00:10:39.118 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260]
00:10:39.118 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260]
00:10:39.118 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260]
00:10:39.119 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260]
00:10:39.119 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260]
00:10:39.119 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260]
00:10:39.119 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260]
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful.
00:10:39.119 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful.
00:10:39.119 10:11:12 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@75 -- # waitforiscsidevices 16
00:10:39.119 10:11:12 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@116 -- # local num=16
00:10:39.119 10:11:12 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i = 1 ))
00:10:39.119 10:11:12 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i <= 20 ))
00:10:39.119 10:11:12 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3
00:10:39.119 [2024-07-25 10:11:12.201546] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.119 10:11:12 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*'
00:10:39.119 [2024-07-25 10:11:12.210578] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.119 10:11:12 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # n=16
00:10:39.119 10:11:12 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@120 -- # '[' 16 -ne 16 ']'
00:10:39.119 10:11:12 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@123 -- # return 0
00:10:39.119 Running FIO
00:10:39.119 10:11:12 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@77 -- # trap 'iscsicleanup; killprocess $iscsi_pid; killprocess $record_pid; delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT
00:10:39.119 10:11:12 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@79 -- # echo 'Running FIO'
00:10:39.119 10:11:12 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1
00:10:39.119 [global]
00:10:39.119 thread=1
00:10:39.119 invalidate=1
00:10:39.119 rw=randrw
00:10:39.119 time_based=1
00:10:39.119 runtime=1
00:10:39.119 ioengine=libaio
00:10:39.119 direct=1
00:10:39.119 bs=131072
00:10:39.119 iodepth=32
00:10:39.119 norandommap=1
00:10:39.119 numjobs=1
00:10:39.119
00:10:39.119 [job0]
00:10:39.119 filename=/dev/sda
00:10:39.119 [job1]
00:10:39.119 filename=/dev/sdb
00:10:39.119 [job2]
00:10:39.119 filename=/dev/sdc
00:10:39.119 [job3]
00:10:39.119 filename=/dev/sdd
00:10:39.119 [job4]
00:10:39.119 filename=/dev/sde
00:10:39.119 [job5]
00:10:39.119 filename=/dev/sdf
00:10:39.119 [job6]
00:10:39.119 filename=/dev/sdg
00:10:39.119 [job7]
00:10:39.119 filename=/dev/sdh
00:10:39.119 [job8]
00:10:39.119 filename=/dev/sdi
00:10:39.119 [job9]
00:10:39.119 filename=/dev/sdj
00:10:39.119 [job10]
00:10:39.119 filename=/dev/sdk
00:10:39.119 [job11]
00:10:39.119 filename=/dev/sdl
00:10:39.119 [job12]
00:10:39.119 filename=/dev/sdm
00:10:39.119 [job13]
00:10:39.119 filename=/dev/sdn
00:10:39.119 [job14]
00:10:39.119 filename=/dev/sdo
00:10:39.119 [job15]
00:10:39.119 filename=/dev/sdp
00:10:39.377 queue_depth set to 113 (sda)
00:10:39.377 queue_depth set to 113 (sdb)
00:10:39.377 queue_depth set to 113 (sdc)
00:10:39.377 queue_depth set to 113 (sdd)
00:10:39.377 queue_depth set to 113 (sde)
00:10:39.377 queue_depth set to 113 (sdf)
00:10:39.636 queue_depth set to 113 (sdg)
00:10:39.636 queue_depth set to 113 (sdh)
00:10:39.636 queue_depth set to 113 (sdi)
00:10:39.636 queue_depth set to 113 (sdj)
00:10:39.636 queue_depth set to 113 (sdk)
00:10:39.636 queue_depth set to 113 (sdl)
00:10:39.636 queue_depth set to 113 (sdm)
00:10:39.636 queue_depth set to 113 (sdn)
00:10:39.636 queue_depth set to 113 (sdo)
00:10:39.636 queue_depth set to 113 (sdp)
00:10:39.895 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
00:10:39.895 fio-3.35
00:10:39.895 Starting 16 threads
00:10:39.895 [2024-07-25 10:11:12.976979] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:12.981120] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:12.984967] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:12.989132] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:12.991232] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:12.993119] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:12.995331] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:12.997949] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:13.000086] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:13.002016] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:13.004382] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:13.007066] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:13.009200] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:13.011219] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:13.013439] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:39.895 [2024-07-25 10:11:13.015830] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.329217] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.332482] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.334342] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.336287] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.338219] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.340118] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.342056] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.344005] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.345944] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.348355] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.350575] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.353216] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.355277] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.357217] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.359923] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272 [2024-07-25 10:11:14.362615] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:41.272
00:10:41.272 job0: (groupid=0, jobs=1): err= 0: pid=69129: Thu Jul 25 10:11:14 2024
00:10:41.272 read: IOPS=564, BW=70.5MiB/s (73.9MB/s)(72.1MiB/1023msec)
00:10:41.272 slat (usec): min=6, max=747, avg=21.41, stdev=50.17
00:10:41.272 clat (usec): min=5146, max=25539, avg=7316.91, stdev=2224.44
00:10:41.272 lat (usec): min=5325, max=25553, avg=7338.32, stdev=2221.42
00:10:41.272 clat percentiles (usec):
00:10:41.272 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6456],
00:10:41.272 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980],
00:10:41.272 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 8225], 95.00th=[ 9241],
00:10:41.272 | 99.00th=[17957], 99.50th=[23462], 99.90th=[25560], 99.95th=[25560],
00:10:41.272 | 99.99th=[25560]
00:10:41.272 bw ( KiB/s): min=73106, max=73325, per=6.18%, avg=73215.50, stdev=154.86, samples=2
00:10:41.272 iops : min= 571, max= 572, avg=571.50, stdev= 0.71, samples=2
00:10:41.272 write: IOPS=604, BW=75.5MiB/s (79.2MB/s)(77.2MiB/1023msec); 0 zone resets
00:10:41.272 slat (usec): min=9, max=1009, avg=29.36, stdev=62.93
00:10:41.272 clat (usec): min=9498, max=67879, avg=45995.60, stdev=5776.96
00:10:41.272 lat (usec): min=9511, max=67895, avg=46024.96, stdev=5778.12
00:10:41.272 clat percentiles (usec):
00:10:41.272 | 1.00th=[19792], 5.00th=[37487], 10.00th=[41157], 20.00th=[43779],
00:10:41.272 | 30.00th=[44827], 40.00th=[45876], 50.00th=[46924], 60.00th=[47973],
00:10:41.272 | 70.00th=[48497], 80.00th=[49546], 90.00th=[50594], 95.00th=[52167],
00:10:41.272 | 99.00th=[55313], 99.50th=[56886], 99.90th=[67634], 99.95th=[67634],
00:10:41.272 | 99.99th=[67634]
00:10:41.272 bw ( KiB/s): min=72814, max=78492, per=6.24%, avg=75653.00, stdev=4014.95, samples=2
00:10:41.272 iops : min= 568, max= 613, avg=590.50, stdev=31.82, samples=2
00:10:41.272 lat (msec) : 10=46.36%, 20=2.09%, 50=44.35%, 100=7.20%
00:10:41.272 cpu : usr=0.78%, sys=1.96%, ctx=1115, majf=0, minf=1
00:10:41.272 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.4%, >=64=0.0%
00:10:41.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:41.272 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:10:41.272 issued rwts: total=577,618,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:41.272 latency : target=0, window=0, percentile=100.00%, depth=32
00:10:41.272 job1: (groupid=0, jobs=1): err= 0: pid=69130: Thu Jul 25 10:11:14 2024
00:10:41.272 read: IOPS=559, BW=69.9MiB/s (73.3MB/s)(71.4MiB/1021msec)
00:10:41.272 slat (usec): min=6, max=368, avg=18.18, stdev=32.49
00:10:41.272 clat (usec): min=869, max=29762, avg=7639.36, stdev=3569.10
00:10:41.272 lat (usec): min=889, max=29772, avg=7657.54, stdev=3567.25
00:10:41.272 clat percentiles (usec):
00:10:41.272 | 1.00th=[ 2671], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390],
00:10:41.272 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980],
00:10:41.272 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 8160], 95.00th=[12125],
00:10:41.272 | 99.00th=[25822], 99.50th=[28705], 99.90th=[29754], 99.95th=[29754],
00:10:41.272 | 99.99th=[29754]
00:10:41.272 bw ( KiB/s): min=69771, max=74388, per=6.09%, avg=72079.50, stdev=3264.71, samples=2
00:10:41.272 iops : min= 545, max= 581, avg=563.00, stdev=25.46, samples=2
00:10:41.272 write: IOPS=604, BW=75.5MiB/s (79.2MB/s)(77.1MiB/1021msec); 0 zone resets
00:10:41.272 slat (usec): min=10, max=466, avg=24.73, stdev=38.13
00:10:41.272 clat (usec): min=6375, max=59905, avg=45776.96, stdev=6506.60
00:10:41.272 lat (usec): min=6393, max=59925, avg=45801.69, stdev=6507.34
00:10:41.272 clat percentiles (usec):
00:10:41.272 | 1.00th=[14484], 5.00th=[37487], 10.00th=[42206], 20.00th=[43779],
00:10:41.272 | 30.00th=[44827], 40.00th=[45876], 50.00th=[46400], 60.00th=[47449],
00:10:41.272 | 70.00th=[48497], 80.00th=[49546], 90.00th=[51119], 95.00th=[52691],
00:10:41.272 | 99.00th=[55313], 99.50th=[56886], 99.90th=[60031], 99.95th=[60031],
00:10:41.272 | 99.99th=[60031]
00:10:41.272 bw ( KiB/s): min=74388, max=77723, per=6.27%, avg=76055.50, stdev=2358.20, samples=2
00:10:41.272 iops : min= 581, max= 607, avg=594.00, stdev=18.38, samples=2
00:10:41.272 lat (usec) : 1000=0.08%
00:10:41.272 lat (msec) : 2=0.08%, 4=0.42%, 10=44.44%, 20=2.44%, 50=44.70%
00:10:41.272 lat (msec) : 100=7.83%
00:10:41.272 cpu : usr=1.18%, sys=1.27%, ctx=1152, majf=0, minf=1
00:10:41.272 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.4%, >=64=0.0%
00:10:41.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:41.272 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:10:41.272 issued rwts: total=571,617,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:41.272 latency : target=0, window=0, percentile=100.00%, depth=32
00:10:41.272 job2: (groupid=0, jobs=1): err= 0: pid=69131: Thu Jul 25 10:11:14 2024
00:10:41.272 read: IOPS=589, BW=73.7MiB/s (77.2MB/s)(75.1MiB/1020msec)
00:10:41.272 slat (usec): min=6, max=666, avg=18.75, stdev=41.11
00:10:41.272 clat (usec): min=5094, max=20317, avg=7455.94, stdev=1683.30
00:10:41.272 lat (usec): min=5104, max=20334, avg=7474.69, stdev=1679.18
00:10:41.272 clat percentiles (usec):
00:10:41.272 | 1.00th=[ 5604], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 6652],
00:10:41.272 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7308],
00:10:41.272 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8225], 95.00th=[10290],
00:10:41.272 | 99.00th=[15008], 99.50th=[16319], 99.90th=[20317], 99.95th=[20317],
00:10:41.272 | 99.99th=[20317]
00:10:41.272 bw ( KiB/s): min=76032, max=77413, per=6.48%, avg=76722.50, stdev=976.51, samples=2
00:10:41.272 iops : min= 594, max= 604, avg=599.00, stdev= 7.07, samples=2
00:10:41.272 write: IOPS=584, BW=73.0MiB/s (76.6MB/s)(74.5MiB/1020msec); 0 zone resets
00:10:41.272 slat (usec): min=8, max=541, avg=28.01, stdev=49.51
00:10:41.272 clat (usec): min=16122, max=59882, avg=47148.40, stdev=5408.35
00:10:41.272 lat (usec): min=16140, max=59897, avg=47176.42, stdev=5409.75
00:10:41.272 clat percentiles (usec):
00:10:41.272 | 1.00th=[23200], 5.00th=[39584], 10.00th=[42206], 20.00th=[44303],
00:10:41.272 | 30.00th=[45876], 40.00th=[46924], 50.00th=[47973], 60.00th=[48497],
00:10:41.272 | 70.00th=[50070], 80.00th=[50594], 90.00th=[52167], 95.00th=[53216],
00:10:41.272 | 99.00th=[55837], 99.50th=[58459], 99.90th=[60031], 99.95th=[60031],
00:10:41.272 | 99.99th=[60031]
00:10:41.272 bw ( KiB/s): min=70144, max=74602, per=5.97%, avg=72373.00, stdev=3152.28, samples=2
00:10:41.273 iops : min= 548, max= 582, avg=565.00, stdev=24.04, samples=2
00:10:41.273 lat (msec) : 10=47.28%, 20=2.92%, 50=35.92%, 100=13.87%
00:10:41.273 cpu : usr=1.28%, sys=1.28%, ctx=1162, majf=0, minf=1
00:10:41.273 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.4%, >=64=0.0%
00:10:41.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:41.273 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:10:41.273 issued rwts: total=601,596,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:41.273 latency : target=0, window=0, percentile=100.00%, depth=32
00:10:41.273 job3: (groupid=0, jobs=1): err= 0: pid=69132: Thu Jul 25 10:11:14 2024
00:10:41.273 read: IOPS=591, BW=73.9MiB/s (77.5MB/s)(75.0MiB/1015msec)
00:10:41.273 slat (usec): min=8, max=806, avg=20.60, stdev=44.83
00:10:41.273 clat (usec): min=1733, max=21218, avg=7077.34, stdev=1979.58
00:10:41.273 lat (usec): min=1744, max=21232, avg=7097.94, stdev=1978.19
00:10:41.273 clat percentiles (usec):
00:10:41.273 | 1.00th=[ 3687], 5.00th=[ 5604], 10.00th=[ 5932], 20.00th=[ 6194],
00:10:41.273 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6915],
00:10:41.273 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 8029], 95.00th=[ 9896],
00:10:41.273 | 99.00th=[18744], 99.50th=[19530], 99.90th=[21103], 99.95th=[21103],
00:10:41.273 | 99.99th=[21103]
00:10:41.273 bw ( KiB/s): min=75008, max=76952, per=6.42%, avg=75980.00, stdev=1374.62, samples=2
00:10:41.273 iops : min= 586, max= 601, avg=593.50, stdev=10.61, samples=2
00:10:41.273 write: IOPS=623, BW=78.0MiB/s (81.7MB/s)(79.1MiB/1015msec); 0 zone resets
00:10:41.273 slat (usec): min=9, max=454, avg=27.33, stdev=40.13
00:10:41.273 clat (usec): min=6208, max=68588, avg=44478.18, stdev=6983.91
00:10:41.273 lat (usec): min=6234, max=68622, avg=44505.51, stdev=6986.55
00:10:41.273 clat percentiles (usec):
00:10:41.273 | 1.00th=[17695], 5.00th=[30540], 10.00th=[38011], 20.00th=[42730],
00:10:41.273 | 30.00th=[43779], 40.00th=[44303], 50.00th=[45351], 60.00th=[46400],
00:10:41.273 | 70.00th=[46924], 80.00th=[48497], 90.00th=[50594], 95.00th=[52167],
00:10:41.273 | 99.00th=[60556], 99.50th=[62129], 99.90th=[68682], 99.95th=[68682],
00:10:41.273 | 99.99th=[68682]
00:10:41.273 bw ( KiB/s): min=75264, max=79238, per=6.37%, avg=77251.00, stdev=2810.04, samples=2
00:10:41.273 iops : min= 588, max= 619, avg=603.50, stdev=21.92, samples=2
00:10:41.273 lat (msec) : 2=0.16%, 4=0.49%, 10=45.74%, 20=3.00%, 50=44.53%
00:10:41.273 lat (msec) : 100=6.08%
00:10:41.273 cpu : usr=0.69%, sys=2.27%, ctx=1154, majf=0, minf=1
00:10:41.273 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0%
00:10:41.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:41.273 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:10:41.273 issued rwts: total=600,633,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:41.273 latency : target=0, window=0, percentile=100.00%, depth=32
00:10:41.273 job4: (groupid=0, jobs=1): err= 0: pid=69153: Thu Jul 25 10:11:14 2024
00:10:41.273 read: IOPS=601, BW=75.2MiB/s (78.8MB/s)(77.1MiB/1026msec)
00:10:41.273 slat (usec): min=7, max=477, avg=18.06, stdev=30.39
00:10:41.273 clat (usec): min=2560, max=30844, avg=7263.96, stdev=2514.95
00:10:41.273 lat (usec): min=2571, max=30855, avg=7282.01, stdev=2513.51
00:10:41.273 clat percentiles (usec):
00:10:41.273 | 1.00th=[ 4883], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6390],
00:10:41.273 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915],
00:10:41.273 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 8225], 95.00th=[10421],
00:10:41.273 | 99.00th=[17957], 99.50th=[27919], 99.90th=[30802], 99.95th=[30802],
00:10:41.273 | 99.99th=[30802]
00:10:41.273 bw ( KiB/s): min=76902, max=79616, per=6.61%, avg=78259.00, stdev=1919.09, samples=2
00:10:41.273 iops : min= 600, max= 622, avg=611.00, stdev=15.56, samples=2
00:10:41.273 write: IOPS=607, BW=75.9MiB/s (79.6MB/s)(77.9MiB/1026msec); 0 zone resets
00:10:41.273 slat (usec): min=9, max=1409, avg=28.69, stdev=88.51
00:10:41.273 clat (usec): min=10589, max=68792, avg=45371.24, stdev=6400.19
00:10:41.273 lat (usec): min=10616, max=68811, avg=45399.93, stdev=6401.82
00:10:41.273 clat percentiles (usec):
00:10:41.273 | 1.00th=[17171], 5.00th=[35390], 10.00th=[40109], 20.00th=[42730],
00:10:41.273 | 30.00th=[44303], 40.00th=[45351], 50.00th=[46400], 60.00th=[46924],
00:10:41.273 | 70.00th=[47973], 80.00th=[49021], 90.00th=[50594], 95.00th=[51643],
00:10:41.273 | 99.00th=[63177], 99.50th=[64750], 99.90th=[68682], 99.95th=[68682],
00:10:41.273 | 99.99th=[68682]
00:10:41.273 bw ( KiB/s): min=73728, max=78946, per=6.30%, avg=76337.00, stdev=3689.68, samples=2
00:10:41.273 iops : min= 576, max= 616, avg=596.00, stdev=28.28, samples=2
00:10:41.273 lat (msec) : 4=0.40%, 10=46.77%, 20=2.98%, 50=42.82%, 100=7.02%
00:10:41.273 cpu : usr=0.98%, sys=1.46%, ctx=1154, majf=0, minf=1
00:10:41.273 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0%
00:10:41.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:41.273 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:10:41.273 issued rwts: total=617,623,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:41.273 latency : target=0, window=0, percentile=100.00%, depth=32
00:10:41.273 job5: (groupid=0, jobs=1): err= 0: pid=69154: Thu Jul 25 10:11:14 2024
00:10:41.273 read: IOPS=593, BW=74.2MiB/s (77.8MB/s)(77.4MiB/1043msec)
00:10:41.273 slat (usec): min=6, max=1018, avg=21.50, stdev=50.53
00:10:41.273 clat (usec): min=3717, max=54314, avg=7261.40, stdev=3548.06
00:10:41.273 lat (usec): min=3748, max=54326, avg=7282.91, stdev=3545.99
00:10:41.273 clat percentiles (usec):
00:10:41.273 | 1.00th=[ 4490], 5.00th=[ 5866], 10.00th=[ 6128], 20.00th=[ 6325],
00:10:41.273 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915],
00:10:41.273 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 7832], 95.00th=[ 9241],
00:10:41.273 | 99.00th=[17171], 99.50th=[22414], 99.90th=[54264], 99.95th=[54264],
00:10:41.273 | 99.99th=[54264]
00:10:41.273 bw ( KiB/s): min=71823, max=86016, per=6.67%, avg=78919.50, stdev=10035.97, samples=2
00:10:41.273 iops : min= 561, max= 672, avg=616.50, stdev=78.49, samples=2
00:10:41.273 write: IOPS=593, BW=74.2MiB/s (77.8MB/s)(77.4MiB/1043msec); 0 zone resets
00:10:41.273 slat (usec): min=9, max=1366, avg=28.02, stdev=68.44
00:10:41.273 clat (usec): min=5960, max=90284, avg=46521.87, stdev=9724.47
00:10:41.273 lat (usec): min=5974, max=90301, avg=46549.88, stdev=9726.81
00:10:41.273 clat percentiles (usec):
00:10:41.273 | 1.00th=[ 6063], 5.00th=[37487], 10.00th=[42206], 20.00th=[43779],
00:10:41.273 | 30.00th=[44827], 40.00th=[45351], 50.00th=[46400], 60.00th=[47449],
00:10:41.273 | 70.00th=[48497], 80.00th=[50070], 90.00th=[53216], 95.00th=[57934],
00:10:41.273 | 99.00th=[80217], 99.50th=[83362], 99.90th=[90702], 99.95th=[90702],
00:10:41.273 | 99.99th=[90702]
00:10:41.273 bw ( KiB/s): min=75520, max=75671, per=6.23%, avg=75595.50, stdev=106.77, samples=2
00:10:41.273 iops : min= 590, max= 591, avg=590.50, stdev= 0.71, samples=2
00:10:41.273 lat (msec) : 4=0.16%, 10=48.87%, 20=2.10%, 50=38.69%, 100=10.18%
00:10:41.273 cpu : usr=1.15%, sys=1.44%, ctx=1130, majf=0, minf=1
00:10:41.273 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0%
00:10:41.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:41.273 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:10:41.273 issued rwts: total=619,619,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:41.273 latency : target=0, window=0, percentile=100.00%, depth=32
00:10:41.273 job6: (groupid=0, jobs=1): err= 0: pid=69160: Thu Jul 25 10:11:14 2024
00:10:41.273 read: IOPS=588, BW=73.6MiB/s (77.2MB/s)(76.8MiB/1043msec)
00:10:41.273 slat (usec): min=6, max=547, avg=19.47, stdev=37.90
00:10:41.273 clat (usec): min=3079, max=50146, avg=8018.30, stdev=4618.28
00:10:41.273 lat (usec): min=3088, max=50157, avg=8037.77, stdev=4616.27
00:10:41.273 clat percentiles (usec):
00:10:41.273 | 1.00th=[ 4293], 5.00th=[ 5997], 10.00th=[ 6259], 20.00th=[ 6587],
00:10:41.273 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7373],
00:10:41.273 | 70.00th=[ 7570], 80.00th=[ 7832], 90.00th=[ 9372], 95.00th=[11600],
00:10:41.273 | 99.00th=[25297], 99.50th=[46924], 99.90th=[50070], 99.95th=[50070],
00:10:41.273 | 99.99th=[50070]
00:10:41.273 bw ( KiB/s): min=72814, max=82688, per=6.57%, avg=77751.00, stdev=6981.97, samples=2
00:10:41.273 iops : min= 568, max= 646, avg=607.00, stdev=55.15, samples=2
00:10:41.273 write: IOPS=592, BW=74.1MiB/s (77.7MB/s)(77.2MiB/1043msec); 0 zone resets
00:10:41.273 slat (usec): min=7, max=527, avg=26.85, stdev=46.02
00:10:41.273 clat (usec): min=3120, max=82538, avg=45901.60, stdev=11116.78
00:10:41.273 lat (usec): min=3131, max=82553, avg=45928.45, stdev=11120.42
00:10:41.273 clat percentiles (usec):
00:10:41.273 | 1.00th=[ 5735], 5.00th=[14484], 10.00th=[39584], 20.00th=[43254],
00:10:41.273 | 30.00th=[45351], 40.00th=[46924], 50.00th=[47973], 60.00th=[49021],
00:10:41.273 | 70.00th=[50070], 80.00th=[51119], 90.00th=[53216], 95.00th=[55313],
00:10:41.273 | 99.00th=[76022], 99.50th=[78119], 99.90th=[82314], 99.95th=[82314],
00:10:41.273 | 99.99th=[82314]
00:10:41.273 bw ( KiB/s): min=75369, max=76288, per=6.25%, avg=75828.50, stdev=649.83, samples=2
00:10:41.273 iops : min= 588, max= 596, avg=592.00, stdev= 5.66, samples=2
00:10:41.273 lat (msec) : 4=0.65%, 10=47.08%, 20=3.49%, 50=33.44%, 100=15.34%
00:10:41.273 cpu : usr=1.15%, sys=1.54%, ctx=1145, majf=0, minf=1
00:10:41.273 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0%
00:10:41.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:41.273 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:10:41.273 issued rwts: total=614,618,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:41.273 latency : target=0, window=0, percentile=100.00%, depth=32
00:10:41.273 job7: (groupid=0, jobs=1): err= 0: pid=69237: Thu Jul 25 10:11:14 2024
00:10:41.273 read: IOPS=531, BW=66.5MiB/s (69.7MB/s)(68.8MiB/1034msec)
00:10:41.273 slat (usec): min=6, max=560, avg=20.05, stdev=45.20
00:10:41.273 clat (usec): min=1428, max=36219, avg=6961.22, stdev=2562.34
00:10:41.273 lat (usec): min=1436, max=36231, avg=6981.27, stdev=2561.58
00:10:41.273 clat percentiles (usec):
00:10:41.273 | 1.00th=[ 1975], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6128],
00:10:41.273 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6783],
00:10:41.273 | 70.00th=[ 6980], 80.00th=[ 7308], 90.00th=[ 7832], 95.00th=[ 8979],
00:10:41.273 | 99.00th=[19792], 99.50th=[23462], 99.90th=[36439], 99.95th=[36439],
00:10:41.273 | 99.99th=[36439]
00:10:41.273 bw ( KiB/s): min=64000, max=76288, per=5.92%, avg=70144.00, stdev=8688.93, samples=2
00:10:41.273 iops : min= 500, max= 596, avg=548.00, stdev=67.88, samples=2
00:10:41.274 write: IOPS=607, BW=75.9MiB/s (79.6MB/s)(78.5MiB/1034msec); 0 zone resets
00:10:41.274 slat (usec): min=8, max=616, avg=27.25, stdev=48.03
00:10:41.274 clat (usec): min=3008, max=75371, avg=46448.24, stdev=7848.37
00:10:41.274 lat (usec): min=3043, max=75397, avg=46475.49, stdev=7848.92
00:10:41.274 clat percentiles (usec):
00:10:41.274 | 1.00th=[ 8586], 5.00th=[38011], 10.00th=[40633], 20.00th=[42730],
00:10:41.274 | 30.00th=[44827], 40.00th=[45876], 50.00th=[46924], 60.00th=[47973],
00:10:41.274 | 70.00th=[49021], 80.00th=[50594], 90.00th=[53216], 95.00th=[55837],
00:10:41.274 | 99.00th=[64226], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974],
00:10:41.274 | 99.99th=[74974]
00:10:41.274 bw ( KiB/s): min=76032, max=77312, per=6.32%, avg=76672.00, stdev=905.10, samples=2
00:10:41.274 iops : min= 594, max= 604, avg=599.00, stdev= 7.07, samples=2
00:10:41.274 lat (msec) : 2=0.51%, 4=0.42%, 10=44.23%, 20=2.21%, 50=40.83%
00:10:41.274 lat (msec) : 100=11.80%
00:10:41.274 cpu : usr=0.97%, sys=1.55%, ctx=1110, majf=0, minf=1
00:10:41.274 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=97.4%, >=64=0.0%
00:10:41.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:41.274 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:10:41.274 issued rwts: total=550,628,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:41.274 latency : target=0, window=0, percentile=100.00%, depth=32
00:10:41.274 job8: (groupid=0, jobs=1): err= 0: pid=69239: Thu Jul 25 10:11:14 2024
00:10:41.274 read: IOPS=554, BW=69.4MiB/s (72.7MB/s)(71.4MiB/1029msec)
00:10:41.274 slat (usec): min=8, max=501, avg=19.81, stdev=34.94
00:10:41.274 clat (usec): min=1609, max=34213, avg=7368.11, stdev=2724.89
00:10:41.274 lat (usec): min=1619, max=34236, avg=7387.92, stdev=2723.83
00:10:41.274 clat percentiles (usec):
00:10:41.274 | 1.00th=[ 5211], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390],
00:10:41.274 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6980],
00:10:41.274 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 8586], 95.00th=[10028],
00:10:41.274 | 99.00th=[21103], 99.50th=[30278], 99.90th=[34341], 99.95th=[34341],
00:10:41.274 | 99.99th=[34341]
00:10:41.274 bw ( KiB/s): min=71424, max=73580, per=6.12%, avg=72502.00, stdev=1524.52, samples=2
00:10:41.274 iops : min= 558, max= 574, avg=566.00, stdev=11.31, samples=2
00:10:41.274 write: IOPS=597, BW=74.7MiB/s (78.3MB/s)(76.9MiB/1029msec); 0 zone resets
00:10:41.274 slat (usec): min=11, max=528, avg=24.76, stdev=31.06
00:10:41.274 clat (usec): min=11160, max=67865, avg=46562.45, stdev=5844.81
00:10:41.274 lat (usec): min=11187, max=67882, avg=46587.20, stdev=5845.93
00:10:41.274 clat percentiles (usec):
00:10:41.274 | 1.00th=[17433], 5.00th=[39584], 10.00th=[42730], 20.00th=[44303],
00:10:41.274 | 30.00th=[45351], 40.00th=[45876], 50.00th=[46924], 60.00th=[47973],
00:10:41.274 | 70.00th=[49021], 80.00th=[49546], 90.00th=[51119], 95.00th=[53216],
00:10:41.274 | 99.00th=[61080], 99.50th=[63177], 99.90th=[67634], 99.95th=[67634],
00:10:41.274 | 99.99th=[67634]
00:10:41.274 bw ( KiB/s): min=73472, max=76902, per=6.20%, avg=75187.00, stdev=2425.38, samples=2
00:10:41.274 iops : min= 574, max= 600, avg=587.00, stdev=18.38, samples=2
00:10:41.274 lat (msec) : 2=0.17%, 4=0.25%, 10=45.28%, 20=2.70%, 50=43.25%
00:10:41.274 lat (msec) : 100=8.35%
00:10:41.274 cpu : usr=0.88%, sys=1.85%, ctx=1101, majf=0, minf=1
00:10:41.274 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.4%, >=64=0.0%
00:10:41.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:41.274 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:10:41.274 issued rwts: total=571,615,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:41.274 latency : target=0, window=0, percentile=100.00%, depth=32
00:10:41.274 job9: (groupid=0, jobs=1): err= 0: pid=69240: Thu Jul 25 10:11:14 2024
00:10:41.274 read: IOPS=619, BW=77.5MiB/s (81.2MB/s)(79.2MiB/1023msec)
00:10:41.274 slat (usec): min=6, max=352, avg=17.28, stdev=26.88
00:10:41.274 clat (usec): min=4357, max=29443, avg=7612.01, stdev=2801.05
00:10:41.274 lat (usec): min=4396, max=29453, avg=7629.29, stdev=2798.89
00:10:41.274 clat percentiles (usec):
00:10:41.274 | 1.00th=[ 5276], 5.00th=[ 6128], 10.00th=[ 6259], 20.00th=[ 6456],
00:10:41.274 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7111],
00:10:41.274 | 70.00th=[ 7308], 80.00th=[ 7767], 90.00th=[ 8979], 95.00th=[11338],
00:10:41.274 | 99.00th=[24249], 99.50th=[26084], 99.90th=[29492], 99.95th=[29492],
00:10:41.274 | 99.99th=[29492]
00:10:41.274 bw ( KiB/s): min=72814, max=87296, per=6.76%, avg=80055.00, stdev=10240.32, samples=2
00:10:41.274 iops : min= 568, max= 682, avg=625.00, stdev=80.61, samples=2
00:10:41.274 write: IOPS=596, BW=74.5MiB/s (78.2MB/s)(76.2MiB/1023msec); 0 zone resets
00:10:41.274 slat (usec): min=7, max=631, avg=28.55, stdev=57.79
00:10:41.274 clat (usec): min=14456, max=64025, avg=45620.18, stdev=5051.35
00:10:41.274 lat (usec): min=14474, max=64042, avg=45648.73, stdev=5047.58
00:10:41.274 clat percentiles (usec):
00:10:41.274 | 1.00th=[26084], 5.00th=[38536], 10.00th=[40633], 20.00th=[42730],
00:10:41.274 | 30.00th=[44303], 40.00th=[45351], 50.00th=[45876], 60.00th=[46924],
00:10:41.274 | 70.00th=[47973], 80.00th=[49021], 90.00th=[50594], 95.00th=[51643],
00:10:41.274 | 99.00th=[60031], 99.50th=[61604], 99.90th=[64226], 99.95th=[64226],
00:10:41.274 | 99.99th=[64226]
00:10:41.274 bw ( KiB/s): min=71936, max=78179, per=6.19%, avg=75057.50, stdev=4414.47, samples=2
00:10:41.274 iops : min= 562, max= 610, avg=586.00, stdev=33.94, samples=2
00:10:41.274 lat (msec) : 10=46.86%, 20=3.30%, 50=43.57%, 100=6.27%
00:10:41.274 cpu : usr=0.68%, sys=1.66%, ctx=1176, majf=0, minf=1
00:10:41.274 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0%
00:10:41.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:41.274 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:10:41.274 issued rwts: total=634,610,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:41.274 latency : target=0, window=0, percentile=100.00%, depth=32
00:10:41.274 job10: (groupid=0, jobs=1): err= 0: pid=69241: Thu Jul 25 10:11:14 2024
00:10:41.274 read: IOPS=595, BW=74.5MiB/s (78.1MB/s)(76.2MiB/1024msec)
00:10:41.274 slat (usec): min=6, max=496, avg=17.62, stdev=34.83
00:10:41.274 clat (usec): min=3081, max=28268, avg=7585.42, stdev=2396.83
00:10:41.274 lat (usec): min=3092, max=28278, avg=7603.04, stdev=2394.92
00:10:41.274 clat percentiles (usec):
00:10:41.274 | 1.00th=[ 5276], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 6587],
00:10:41.274 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7308],
00:10:41.274 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8291], 95.00th=[10683],
00:10:41.274 | 99.00th=[17695], 99.50th=[25297], 99.90th=[28181], 99.95th=[28181],
00:10:41.274 | 99.99th=[28181]
00:10:41.274 bw ( KiB/s): min=74347, max=80288, per=6.53%, avg=77317.50, stdev=4200.92, samples=2
00:10:41.274 iops : min= 580, max= 627, avg=603.50, stdev=33.23, samples=2
00:10:41.274 write: IOPS=580, BW=72.5MiB/s (76.0MB/s)(74.2MiB/1024msec); 0 zone resets
00:10:41.274 slat (usec): min=7, max=576, avg=25.55, stdev=42.39
00:10:41.274 clat (usec): min=16304, max=68197, avg=47253.69, stdev=4914.52
00:10:41.274 lat (usec): min=16320, max=68215, avg=47279.24, stdev=4916.29
00:10:41.274 clat percentiles (usec):
00:10:41.274 | 1.00th=[25297], 5.00th=[41681], 10.00th=[43779], 20.00th=[44827],
00:10:41.274 | 30.00th=[45876], 40.00th=[46924], 50.00th=[47449], 60.00th=[47973],
00:10:41.274 | 70.00th=[49021], 80.00th=[50594], 90.00th=[51643], 95.00th=[52691],
00:10:41.274 | 99.00th=[61604], 99.50th=[64750], 99.90th=[68682], 99.95th=[68682],
00:10:41.274 | 99.99th=[68682]
00:10:41.274 bw ( KiB/s): min=70284, max=75113, per=6.00%, avg=72698.50, stdev=3414.62, samples=2
00:10:41.274 iops : min= 549, max= 586, avg=567.50, stdev=26.16, samples=2 00:10:41.274 lat (msec) : 4=0.08%, 10=47.34%, 20=2.91%, 50=38.46%, 100=11.21% 00:10:41.274 cpu : usr=0.49%, sys=1.66%, ctx=1185, majf=0, minf=1 00:10:41.274 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.4%, >=64=0.0% 00:10:41.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.274 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:10:41.274 issued rwts: total=610,594,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.274 latency : target=0, window=0, percentile=100.00%, depth=32 00:10:41.274 job11: (groupid=0, jobs=1): err= 0: pid=69242: Thu Jul 25 10:11:14 2024 00:10:41.274 read: IOPS=620, BW=77.5MiB/s (81.3MB/s)(79.8MiB/1029msec) 00:10:41.274 slat (usec): min=6, max=394, avg=18.73, stdev=31.92 00:10:41.274 clat (usec): min=5451, max=29595, avg=7262.17, stdev=1925.19 00:10:41.274 lat (usec): min=5461, max=29610, avg=7280.90, stdev=1922.95 00:10:41.274 clat percentiles (usec): 00:10:41.274 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6390], 00:10:41.274 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6980], 00:10:41.274 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 8455], 95.00th=[10552], 00:10:41.274 | 99.00th=[13435], 99.50th=[19006], 99.90th=[29492], 99.95th=[29492], 00:10:41.274 | 99.99th=[29492] 00:10:41.274 bw ( KiB/s): min=77210, max=85588, per=6.88%, avg=81399.00, stdev=5924.14, samples=2 00:10:41.274 iops : min= 603, max= 668, avg=635.50, stdev=45.96, samples=2 00:10:41.274 write: IOPS=614, BW=76.8MiB/s (80.5MB/s)(79.0MiB/1029msec); 0 zone resets 00:10:41.274 slat (usec): min=11, max=1233, avg=28.91, stdev=75.42 00:10:41.274 clat (usec): min=15246, max=62437, avg=44611.41, stdev=5199.40 00:10:41.274 lat (usec): min=15266, max=62454, avg=44640.32, stdev=5203.28 00:10:41.274 clat percentiles (usec): 00:10:41.274 | 1.00th=[29754], 5.00th=[35914], 10.00th=[38536], 20.00th=[41157], 00:10:41.274 
| 30.00th=[42730], 40.00th=[43779], 50.00th=[45351], 60.00th=[46400], 00:10:41.274 | 70.00th=[47449], 80.00th=[48497], 90.00th=[50070], 95.00th=[51119], 00:10:41.274 | 99.00th=[55313], 99.50th=[59507], 99.90th=[62653], 99.95th=[62653], 00:10:41.274 | 99.99th=[62653] 00:10:41.274 bw ( KiB/s): min=73362, max=80734, per=6.35%, avg=77048.00, stdev=5212.79, samples=2 00:10:41.274 iops : min= 573, max= 630, avg=601.50, stdev=40.31, samples=2 00:10:41.274 lat (msec) : 10=47.24%, 20=3.07%, 50=44.49%, 100=5.20% 00:10:41.274 cpu : usr=0.88%, sys=1.85%, ctx=1169, majf=0, minf=1 00:10:41.274 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.6%, >=64=0.0% 00:10:41.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.274 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:10:41.275 issued rwts: total=638,632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.275 latency : target=0, window=0, percentile=100.00%, depth=32 00:10:41.275 job12: (groupid=0, jobs=1): err= 0: pid=69243: Thu Jul 25 10:11:14 2024 00:10:41.275 read: IOPS=626, BW=78.3MiB/s (82.1MB/s)(81.4MiB/1039msec) 00:10:41.275 slat (usec): min=6, max=766, avg=21.88, stdev=59.46 00:10:41.275 clat (usec): min=434, max=49374, avg=7835.10, stdev=4566.66 00:10:41.275 lat (usec): min=1201, max=49399, avg=7856.98, stdev=4565.70 00:10:41.275 clat percentiles (usec): 00:10:41.275 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6456], 00:10:41.275 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:10:41.275 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 8586], 95.00th=[11994], 00:10:41.275 | 99.00th=[23987], 99.50th=[43779], 99.90th=[49546], 99.95th=[49546], 00:10:41.275 | 99.99th=[49546] 00:10:41.275 bw ( KiB/s): min=68864, max=96063, per=6.97%, avg=82463.50, stdev=19232.60, samples=2 00:10:41.275 iops : min= 538, max= 750, avg=644.00, stdev=149.91, samples=2 00:10:41.275 write: IOPS=597, BW=74.7MiB/s (78.3MB/s)(77.6MiB/1039msec); 0 zone 
resets 00:10:41.275 slat (usec): min=9, max=14486, avg=51.00, stdev=582.29 00:10:41.275 clat (usec): min=7511, max=79071, avg=45164.67, stdev=9884.13 00:10:41.275 lat (usec): min=7527, max=79103, avg=45215.67, stdev=9815.69 00:10:41.275 clat percentiles (usec): 00:10:41.275 | 1.00th=[ 7635], 5.00th=[23725], 10.00th=[41157], 20.00th=[43254], 00:10:41.275 | 30.00th=[44303], 40.00th=[44827], 50.00th=[45876], 60.00th=[46924], 00:10:41.275 | 70.00th=[47973], 80.00th=[49021], 90.00th=[51119], 95.00th=[59507], 00:10:41.275 | 99.00th=[69731], 99.50th=[73925], 99.90th=[79168], 99.95th=[79168], 00:10:41.275 | 99.99th=[79168] 00:10:41.275 bw ( KiB/s): min=75520, max=76646, per=6.27%, avg=76083.00, stdev=796.20, samples=2 00:10:41.275 iops : min= 590, max= 598, avg=594.00, stdev= 5.66, samples=2 00:10:41.275 lat (usec) : 500=0.08% 00:10:41.275 lat (msec) : 10=49.06%, 20=2.52%, 50=41.35%, 100=7.00% 00:10:41.275 cpu : usr=0.77%, sys=1.83%, ctx=1180, majf=0, minf=1 00:10:41.275 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.6%, >=64=0.0% 00:10:41.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.275 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:10:41.275 issued rwts: total=651,621,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.275 latency : target=0, window=0, percentile=100.00%, depth=32 00:10:41.275 job13: (groupid=0, jobs=1): err= 0: pid=69244: Thu Jul 25 10:11:14 2024 00:10:41.275 read: IOPS=589, BW=73.7MiB/s (77.3MB/s)(75.8MiB/1028msec) 00:10:41.275 slat (usec): min=6, max=5368, avg=29.46, stdev=224.71 00:10:41.275 clat (usec): min=2071, max=31048, avg=7373.22, stdev=3136.15 00:10:41.275 lat (usec): min=3143, max=31057, avg=7402.68, stdev=3127.72 00:10:41.275 clat percentiles (usec): 00:10:41.275 | 1.00th=[ 3687], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 6325], 00:10:41.275 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6980], 00:10:41.275 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 
8094], 95.00th=[ 9765], 00:10:41.275 | 99.00th=[26084], 99.50th=[28967], 99.90th=[31065], 99.95th=[31065], 00:10:41.275 | 99.99th=[31065] 00:10:41.275 bw ( KiB/s): min=69632, max=84224, per=6.50%, avg=76928.00, stdev=10318.10, samples=2 00:10:41.275 iops : min= 544, max= 658, avg=601.00, stdev=80.61, samples=2 00:10:41.275 write: IOPS=599, BW=74.9MiB/s (78.5MB/s)(77.0MiB/1028msec); 0 zone resets 00:10:41.275 slat (usec): min=9, max=566, avg=23.75, stdev=33.39 00:10:41.275 clat (usec): min=6093, max=67128, avg=46035.76, stdev=5896.68 00:10:41.275 lat (usec): min=6105, max=67144, avg=46059.51, stdev=5897.95 00:10:41.275 clat percentiles (usec): 00:10:41.275 | 1.00th=[21103], 5.00th=[38536], 10.00th=[40633], 20.00th=[43254], 00:10:41.275 | 30.00th=[44827], 40.00th=[45876], 50.00th=[46924], 60.00th=[47449], 00:10:41.275 | 70.00th=[47973], 80.00th=[49021], 90.00th=[51119], 95.00th=[52691], 00:10:41.275 | 99.00th=[58983], 99.50th=[64226], 99.90th=[67634], 99.95th=[67634], 00:10:41.275 | 99.99th=[67634] 00:10:41.275 bw ( KiB/s): min=72448, max=78592, per=6.23%, avg=75520.00, stdev=4344.46, samples=2 00:10:41.275 iops : min= 566, max= 614, avg=590.00, stdev=33.94, samples=2 00:10:41.275 lat (msec) : 4=0.65%, 10=46.97%, 20=1.55%, 50=43.04%, 100=7.77% 00:10:41.275 cpu : usr=0.68%, sys=1.75%, ctx=1110, majf=0, minf=1 00:10:41.275 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.5%, >=64=0.0% 00:10:41.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.275 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:10:41.275 issued rwts: total=606,616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.275 latency : target=0, window=0, percentile=100.00%, depth=32 00:10:41.275 job14: (groupid=0, jobs=1): err= 0: pid=69245: Thu Jul 25 10:11:14 2024 00:10:41.275 read: IOPS=572, BW=71.6MiB/s (75.1MB/s)(73.8MiB/1030msec) 00:10:41.275 slat (usec): min=6, max=941, avg=20.29, stdev=44.70 00:10:41.275 clat (usec): min=719, max=35503, 
avg=7496.60, stdev=2849.50 00:10:41.275 lat (usec): min=734, max=35515, avg=7516.89, stdev=2847.91 00:10:41.275 clat percentiles (usec): 00:10:41.275 | 1.00th=[ 2180], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6587], 00:10:41.275 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7242], 00:10:41.275 | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 8225], 95.00th=[10683], 00:10:41.275 | 99.00th=[23200], 99.50th=[32900], 99.90th=[35390], 99.95th=[35390], 00:10:41.275 | 99.99th=[35390] 00:10:41.275 bw ( KiB/s): min=70400, max=79519, per=6.33%, avg=74959.50, stdev=6448.11, samples=2 00:10:41.275 iops : min= 550, max= 621, avg=585.50, stdev=50.20, samples=2 00:10:41.275 write: IOPS=597, BW=74.6MiB/s (78.3MB/s)(76.9MiB/1030msec); 0 zone resets 00:10:41.275 slat (usec): min=7, max=1311, avg=34.18, stdev=92.31 00:10:41.275 clat (usec): min=3836, max=73114, avg=46242.42, stdev=8867.26 00:10:41.275 lat (usec): min=3845, max=73128, avg=46276.60, stdev=8870.15 00:10:41.275 clat percentiles (usec): 00:10:41.275 | 1.00th=[11994], 5.00th=[25035], 10.00th=[41157], 20.00th=[44303], 00:10:41.275 | 30.00th=[45351], 40.00th=[46924], 50.00th=[47449], 60.00th=[48497], 00:10:41.275 | 70.00th=[49546], 80.00th=[51119], 90.00th=[52167], 95.00th=[54264], 00:10:41.275 | 99.00th=[66323], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:10:41.275 | 99.99th=[72877] 00:10:41.275 bw ( KiB/s): min=75158, max=75776, per=6.22%, avg=75467.00, stdev=436.99, samples=2 00:10:41.275 iops : min= 587, max= 592, avg=589.50, stdev= 3.54, samples=2 00:10:41.275 lat (usec) : 750=0.08% 00:10:41.275 lat (msec) : 2=0.17%, 4=0.58%, 10=45.73%, 20=4.15%, 50=35.52% 00:10:41.275 lat (msec) : 100=13.78% 00:10:41.275 cpu : usr=1.26%, sys=1.55%, ctx=1055, majf=0, minf=1 00:10:41.275 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.4%, >=64=0.0% 00:10:41.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.275 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 
64=0.0%, >=64=0.0% 00:10:41.275 issued rwts: total=590,615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.275 latency : target=0, window=0, percentile=100.00%, depth=32 00:10:41.275 job15: (groupid=0, jobs=1): err= 0: pid=69246: Thu Jul 25 10:11:14 2024 00:10:41.275 read: IOPS=578, BW=72.3MiB/s (75.8MB/s)(74.8MiB/1034msec) 00:10:41.275 slat (usec): min=6, max=910, avg=20.56, stdev=61.05 00:10:41.275 clat (usec): min=2589, max=37082, avg=7262.79, stdev=2899.66 00:10:41.275 lat (usec): min=2609, max=37093, avg=7283.34, stdev=2896.42 00:10:41.275 clat percentiles (usec): 00:10:41.275 | 1.00th=[ 4883], 5.00th=[ 5604], 10.00th=[ 5997], 20.00th=[ 6259], 00:10:41.275 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6783], 60.00th=[ 6980], 00:10:41.275 | 70.00th=[ 7177], 80.00th=[ 7504], 90.00th=[ 8455], 95.00th=[ 9765], 00:10:41.275 | 99.00th=[20317], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:10:41.275 | 99.99th=[36963] 00:10:41.275 bw ( KiB/s): min=71936, max=80128, per=6.42%, avg=76032.00, stdev=5792.62, samples=2 00:10:41.275 iops : min= 562, max= 626, avg=594.00, stdev=45.25, samples=2 00:10:41.275 write: IOPS=604, BW=75.6MiB/s (79.2MB/s)(78.1MiB/1034msec); 0 zone resets 00:10:41.275 slat (usec): min=7, max=1288, avg=26.81, stdev=71.14 00:10:41.275 clat (usec): min=5353, max=72124, avg=45814.30, stdev=7349.66 00:10:41.275 lat (usec): min=5389, max=72138, avg=45841.10, stdev=7341.00 00:10:41.275 clat percentiles (usec): 00:10:41.275 | 1.00th=[12387], 5.00th=[38011], 10.00th=[41681], 20.00th=[43779], 00:10:41.275 | 30.00th=[44303], 40.00th=[45351], 50.00th=[46400], 60.00th=[46924], 00:10:41.275 | 70.00th=[47973], 80.00th=[49021], 90.00th=[51119], 95.00th=[55837], 00:10:41.275 | 99.00th=[67634], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828], 00:10:41.275 | 99.99th=[71828] 00:10:41.275 bw ( KiB/s): min=75776, max=77312, per=6.31%, avg=76544.00, stdev=1086.12, samples=2 00:10:41.275 iops : min= 592, max= 604, avg=598.00, stdev= 8.49, samples=2 00:10:41.275 lat 
(msec) : 4=0.16%, 10=47.02%, 20=1.96%, 50=43.91%, 100=6.95% 00:10:41.275 cpu : usr=0.48%, sys=1.65%, ctx=1132, majf=0, minf=1 00:10:41.275 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.5%, >=64=0.0% 00:10:41.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.275 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:10:41.275 issued rwts: total=598,625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.275 latency : target=0, window=0, percentile=100.00%, depth=32 00:10:41.275 00:10:41.275 Run status group 0 (all jobs): 00:10:41.275 READ: bw=1156MiB/s (1212MB/s), 66.5MiB/s-78.3MiB/s (69.7MB/s-82.1MB/s), io=1206MiB (1264MB), run=1015-1043msec 00:10:41.275 WRITE: bw=1184MiB/s (1242MB/s), 72.5MiB/s-78.0MiB/s (76.0MB/s-81.7MB/s), io=1235MiB (1295MB), run=1015-1043msec 00:10:41.275 00:10:41.275 Disk stats (read/write): 00:10:41.275 sda: ios=581/550, merge=0/0, ticks=3785/25138, in_queue=28924, util=77.31% 00:10:41.275 sdb: ios=571/548, merge=0/0, ticks=3813/25164, in_queue=28978, util=77.36% 00:10:41.275 sdc: ios=604/518, merge=0/0, ticks=4101/24641, in_queue=28743, util=77.04% 00:10:41.275 sdd: ios=604/555, merge=0/0, ticks=3784/24546, in_queue=28331, util=76.40% 00:10:41.275 sde: ios=623/556, merge=0/0, ticks=4046/25005, in_queue=29051, util=79.60% 00:10:41.275 sdf: ios=635/562, merge=0/0, ticks=4102/25389, in_queue=29492, util=80.84% 00:10:41.275 sdg: ios=596/563, merge=0/0, ticks=4243/25176, in_queue=29420, util=81.41% 00:10:41.276 sdh: ios=517/557, merge=0/0, ticks=3472/25667, in_queue=29140, util=82.45% 00:10:41.276 sdi: ios=525/547, merge=0/0, ticks=3712/25270, in_queue=28983, util=82.95% 00:10:41.276 sdj: ios=580/539, merge=0/0, ticks=4212/24578, in_queue=28791, util=83.41% 00:10:41.276 sdk: ios=563/521, merge=0/0, ticks=4137/24588, in_queue=28726, util=83.77% 00:10:41.276 sdl: ios=588/552, merge=0/0, ticks=4207/24601, in_queue=28808, util=84.85% 00:10:41.276 sdm: ios=611/563, merge=0/0, 
ticks=4170/24747, in_queue=28918, util=87.28% 00:10:41.276 sdn: ios=569/548, merge=0/0, ticks=4004/25025, in_queue=29030, util=87.24% 00:10:41.276 sdo: ios=558/550, merge=0/0, ticks=3956/25161, in_queue=29118, util=87.23% 00:10:41.276 sdp: ios=551/557, merge=0/0, ticks=3808/25252, in_queue=29061, util=88.99% 00:10:41.276 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@82 -- # iscsicleanup 00:10:41.276 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:10:41.276 Cleaning up iSCSI connection 00:10:41.276 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:10:41.843 Logging out of session [sid: 14, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:10:41.843 Logging out of session [sid: 15, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:41.843 Logging out of session [sid: 16, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:10:41.843 Logging out of session [sid: 17, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:10:41.843 Logging out of session [sid: 18, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:10:41.843 Logging out of session [sid: 19, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:10:41.843 Logging out of session [sid: 20, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:10:41.843 Logging out of session [sid: 21, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:10:41.843 Logging out of session [sid: 22, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:10:41.843 Logging out of session [sid: 23, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:10:41.843 Logging out of session [sid: 24, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:10:41.843 Logging out of session [sid: 25, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:10:41.843 
Logging out of session [sid: 26, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:10:41.843 Logging out of session [sid: 27, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:10:41.843 Logging out of session [sid: 28, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:10:41.843 Logging out of session [sid: 29, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:10:41.843 Logout of [sid: 14, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:10:41.843 Logout of [sid: 15, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:10:41.843 Logout of [sid: 16, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:10:41.843 Logout of [sid: 17, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:10:41.843 Logout of [sid: 18, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:10:41.843 Logout of [sid: 19, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:10:41.843 Logout of [sid: 20, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:10:41.843 Logout of [sid: 21, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:10:41.843 Logout of [sid: 22, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:10:41.843 Logout of [sid: 23, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:10:41.843 Logout of [sid: 24, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:10:41.843 Logout of [sid: 25, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:10:41.843 Logout of [sid: 26, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:10:41.843 Logout of [sid: 27, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:10:41.843 Logout of [sid: 28, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 
00:10:41.843 Logout of [sid: 29, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@983 -- # rm -rf 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@84 -- # RPCS= 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # seq 0 15 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target0\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc0\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target1\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc1\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target2\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc2\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target3\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc3\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target4\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc4\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target5\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc5\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target6\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc6\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target7\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc7\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target8\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc8\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target9\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc9\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target10\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc10\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target11\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc11\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target12\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc12\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target13\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc13\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target14\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc14\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target15\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc15\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # echo -e iscsi_delete_target_node 'iqn.2016-06.io.spdk:Target0\nbdev_malloc_delete' 'Malloc0\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target1\nbdev_malloc_delete' 'Malloc1\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target2\nbdev_malloc_delete' 'Malloc2\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target3\nbdev_malloc_delete' 'Malloc3\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target4\nbdev_malloc_delete' 'Malloc4\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target5\nbdev_malloc_delete' 'Malloc5\niscsi_delete_target_node' 
'iqn.2016-06.io.spdk:Target6\nbdev_malloc_delete' 'Malloc6\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target7\nbdev_malloc_delete' 'Malloc7\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target8\nbdev_malloc_delete' 'Malloc8\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target9\nbdev_malloc_delete' 'Malloc9\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target10\nbdev_malloc_delete' 'Malloc10\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target11\nbdev_malloc_delete' 'Malloc11\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target12\nbdev_malloc_delete' 'Malloc12\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target13\nbdev_malloc_delete' 'Malloc13\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target14\nbdev_malloc_delete' 'Malloc14\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target15\nbdev_malloc_delete' 'Malloc15\n' 00:10:41.843 10:11:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:42.411 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@92 -- # trap 'delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:42.411 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@94 -- # killprocess 68722 00:10:42.411 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@948 -- # '[' -z 68722 ']' 00:10:42.411 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@952 -- # kill -0 68722 00:10:42.411 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # uname 00:10:42.411 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:42.411 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68722 00:10:42.411 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:42.411 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:42.411 killing process with pid 68722 00:10:42.411 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68722' 00:10:42.411 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@967 -- # kill 68722 00:10:42.411 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@972 -- # wait 68722 00:10:42.979 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@95 -- # killprocess 68757 00:10:42.979 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@948 -- # '[' -z 68757 ']' 00:10:42.979 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@952 -- # kill -0 68757 00:10:42.979 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # uname 00:10:42.979 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:42.979 10:11:15 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68757 00:10:42.979 10:11:16 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # process_name=spdk_trace_reco 00:10:42.979 10:11:16 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@958 -- # '[' spdk_trace_reco = sudo ']' 00:10:42.979 killing process with pid 68757 00:10:42.979 10:11:16 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68757' 00:10:42.979 10:11:16 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@967 -- # kill 68757 00:10:42.979 10:11:16 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@972 -- # wait 68757 00:10:42.979 10:11:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@96 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f ./tmp-trace/record.trace 00:10:55.181 10:11:27 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@100 -- # grep 'trace entries for lcore' ./tmp-trace/record.notice 00:10:55.181 10:11:27 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # cut -d ' ' -f 2 00:10:55.181 10:11:27 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # record_num='168283 00:10:55.181 169256 00:10:55.181 169227 00:10:55.181 169960' 00:10:55.181 10:11:27 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # grep 'Trace Size of lcore' ./tmp-trace/trace.log 00:10:55.181 10:11:27 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # cut -d ' ' -f 6 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # trace_tool_num='168283 00:10:55.181 169256 00:10:55.181 169227 00:10:55.181 169960' 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@105 -- # delete_tmp_files 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@19 -- # rm -rf ./tmp-trace 00:10:55.181 entries numbers from trace record are: 168283 169256 169227 169960 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@107 -- # echo 'entries numbers from trace record are:' 168283 169256 169227 169960 00:10:55.181 entries numbers from trace tool are: 168283 169256 169227 169960 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@108 -- # echo 'entries numbers from trace tool are:' 168283 169256 169227 169960 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@110 -- # arr_record_num=($record_num) 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@111 -- # arr_trace_tool_num=($trace_tool_num) 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@112 -- # len_arr_record_num=4 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@113 -- # len_arr_trace_tool_num=4 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@116 -- # '[' 4 -ne 4 ']' 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # seq 0 3 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 168283 -le 4096 ']' 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 168283 -ne 168283 ']' 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 169256 -le 4096 ']' 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 169256 -ne 169256 ']' 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 169227 -le 4096 ']' 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 169227 -ne 169227 ']' 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 169960 -le 4096 ']' 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 169960 -ne 169960 ']' 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@135 -- # trap - SIGINT SIGTERM EXIT 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@136 -- # iscsitestfini 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:10:55.181 00:10:55.181 real 0m19.120s 00:10:55.181 user 0m40.845s 00:10:55.181 sys 0m3.699s 00:10:55.181 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:10:55.182 ************************************ 00:10:55.182 END TEST iscsi_tgt_trace_record 00:10:55.182 ************************************ 00:10:55.182 10:11:28 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:10:55.182 10:11:28 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@41 -- # run_test iscsi_tgt_login_redirection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:10:55.182 10:11:28 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:55.182 10:11:28 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.182 10:11:28 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:10:55.182 ************************************ 00:10:55.182 START TEST iscsi_tgt_login_redirection 00:10:55.182 ************************************ 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:10:55.182 * Looking for test storage... 
00:10:55.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@12 -- # iscsitestinit 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@14 -- # NULL_BDEV_SIZE=64 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@15 -- # NULL_BLOCK_SIZE=512 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@17 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@20 -- # rpc_addr1=/var/tmp/spdk0.sock 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@21 -- # rpc_addr2=/var/tmp/spdk1.sock 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@25 -- # timing_enter start_iscsi_tgts 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:10:55.182 10:11:28 
iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@28 -- # pid1=69589 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@29 -- # echo 'Process pid: 69589' 00:10:55.182 Process pid: 69589 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@27 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -i 0 -m 0x1 --wait-for-rpc 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@32 -- # pid2=69590 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@33 -- # echo 'Process pid: 69590' 00:10:55.182 Process pid: 69590 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@35 -- # trap 'killprocess $pid1; killprocess $pid2; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@37 -- # waitforlisten 69589 /var/tmp/spdk0.sock 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@829 -- # '[' -z 69589 ']' 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk0.sock 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -i 1 -m 0x2 --wait-for-rpc 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 
00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.182 10:11:28 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:10:55.182 [2024-07-25 10:11:28.360402] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:10:55.182 [2024-07-25 10:11:28.360535] [ DPDK EAL parameters: iscsi -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.182 [2024-07-25 10:11:28.360958] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:10:55.182 [2024-07-25 10:11:28.361044] [ DPDK EAL parameters: iscsi -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:55.440 [2024-07-25 10:11:28.510823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.440 [2024-07-25 10:11:28.516316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.440 [2024-07-25 10:11:28.613552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.440 [2024-07-25 10:11:28.639947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.374 10:11:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.374 10:11:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@862 -- # return 0 00:10:56.374 10:11:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_set_options -w 0 -o 30 -a 16 00:10:56.374 10:11:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock framework_start_init 00:10:56.632 iscsi_tgt_1 is listening. 00:10:56.632 10:11:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@40 -- # echo 'iscsi_tgt_1 is listening.' 00:10:56.632 10:11:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@42 -- # waitforlisten 69590 /var/tmp/spdk1.sock 00:10:56.632 10:11:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@829 -- # '[' -z 69590 ']' 00:10:56.632 10:11:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk1.sock 00:10:56.632 10:11:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.632 10:11:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 00:10:56.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 
00:10:56.632 10:11:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.632 10:11:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:10:56.890 10:11:30 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.890 10:11:30 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@862 -- # return 0 00:10:56.890 10:11:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_set_options -w 0 -o 30 -a 16 00:10:57.147 10:11:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock framework_start_init 00:10:57.405 iscsi_tgt_2 is listening. 00:10:57.405 10:11:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@45 -- # echo 'iscsi_tgt_2 is listening.' 
00:10:57.405 10:11:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@47 -- # timing_exit start_iscsi_tgts 00:10:57.405 10:11:30 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:57.405 10:11:30 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:10:57.405 10:11:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:10:57.662 10:11:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 10.0.0.1:3260 00:10:57.919 10:11:31 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock bdev_null_create Null0 64 512 00:10:58.177 Null0 00:10:58.177 10:11:31 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:10:58.177 10:11:31 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:10:58.434 10:11:31 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 10.0.0.3:3260 -p 00:10:58.691 10:11:31 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock bdev_null_create Null0 64 512 00:10:58.691 Null0 00:10:58.691 10:11:31 iscsi_tgt.iscsi_tgt_login_redirection 
-- login_redirection/login_redirection.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:10:58.948 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@67 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:58.948 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:10:58.948 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@68 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:10:58.948 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:58.948 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:10:58.948 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@69 -- # waitforiscsidevices 1 00:10:58.948 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@116 -- # local num=1 00:10:58.948 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:58.948 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:58.949 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:58.949 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:58.949 [2024-07-25 10:11:32.155337] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:58.949 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # n=1 00:10:58.949 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:10:58.949 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@123 -- # return 0 00:10:58.949 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@72 -- # 
fiopid=69682 00:10:58.949 FIO pid: 69682 00:10:58.949 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@73 -- # echo 'FIO pid: 69682' 00:10:58.949 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t randrw -r 15 00:10:58.949 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@75 -- # trap 'iscsicleanup; killprocess $pid1; killprocess $pid2; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:58.949 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:10:58.949 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # jq length 00:10:58.949 [global] 00:10:58.949 thread=1 00:10:58.949 invalidate=1 00:10:58.949 rw=randrw 00:10:58.949 time_based=1 00:10:58.949 runtime=15 00:10:58.949 ioengine=libaio 00:10:58.949 direct=1 00:10:58.949 bs=512 00:10:58.949 iodepth=1 00:10:58.949 norandommap=1 00:10:58.949 numjobs=1 00:10:58.949 00:10:58.949 [job0] 00:10:58.949 filename=/dev/sda 00:10:59.205 queue_depth set to 113 (sda) 00:10:59.206 job0: (g=0): rw=randrw, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:10:59.206 fio-3.35 00:10:59.206 Starting 1 thread 00:10:59.206 [2024-07-25 10:11:32.334808] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:59.206 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # '[' 1 = 1 ']' 00:10:59.206 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:10:59.206 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- 
login_redirection/login_redirection.sh@78 -- # jq length 00:10:59.464 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # '[' 0 = 0 ']' 00:10:59.464 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 -a 10.0.0.3 -p 3260 00:10:59.722 10:11:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:10:59.980 10:11:33 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@85 -- # sleep 5 00:11:05.249 10:11:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:11:05.249 10:11:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # jq length 00:11:05.249 10:11:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # '[' 0 = 0 ']' 00:11:05.249 10:11:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:11:05.249 10:11:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # jq length 00:11:05.508 10:11:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # '[' 1 = 1 ']' 00:11:05.508 10:11:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 00:11:05.508 10:11:38 iscsi_tgt.iscsi_tgt_login_redirection -- 
login_redirection/login_redirection.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:11:05.766 10:11:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@93 -- # sleep 5 00:11:11.032 10:11:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:11:11.032 10:11:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # jq length 00:11:11.032 10:11:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # '[' 1 = 1 ']' 00:11:11.032 10:11:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:11:11.032 10:11:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # jq length 00:11:11.291 10:11:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # '[' 0 = 0 ']' 00:11:11.291 10:11:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@98 -- # wait 69682 00:11:14.586 [2024-07-25 10:11:47.441962] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:14.586 00:11:14.586 job0: (groupid=0, jobs=1): err= 0: pid=69714: Thu Jul 25 10:11:47 2024 00:11:14.586 read: IOPS=6193, BW=3097KiB/s (3171kB/s)(45.4MiB/15001msec) 00:11:14.586 slat (usec): min=3, max=131, avg= 5.63, stdev= 1.44 00:11:14.586 clat (nsec): min=1573, max=2006.9M, avg=74400.30, stdev=6584084.61 00:11:14.586 lat (usec): min=47, max=2006.9k, avg=80.03, stdev=6584.13 00:11:14.586 clat percentiles (usec): 00:11:14.586 | 1.00th=[ 46], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 49], 00:11:14.586 | 30.00th=[ 49], 40.00th=[ 50], 50.00th=[ 52], 60.00th=[ 53], 
00:11:14.586 | 70.00th=[ 54], 80.00th=[ 56], 90.00th=[ 60], 95.00th=[ 63], 00:11:14.586 | 99.00th=[ 75], 99.50th=[ 80], 99.90th=[ 126], 99.95th=[ 167], 00:11:14.586 | 99.99th=[ 586] 00:11:14.586 bw ( KiB/s): min= 491, max= 4453, per=100.00%, avg=3853.57, stdev=1045.42, samples=23 00:11:14.586 iops : min= 982, max= 8906, avg=7707.13, stdev=2090.85, samples=23 00:11:14.586 write: IOPS=6173, BW=3087KiB/s (3161kB/s)(45.2MiB/15001msec); 0 zone resets 00:11:14.586 slat (nsec): min=3452, max=89974, avg=5511.79, stdev=1360.71 00:11:14.586 clat (usec): min=40, max=2006.2k, avg=75.23, stdev=6592.31 00:11:14.586 lat (usec): min=48, max=2006.2k, avg=80.74, stdev=6592.30 00:11:14.586 clat percentiles (usec): 00:11:14.586 | 1.00th=[ 47], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 50], 00:11:14.586 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 52], 60.00th=[ 55], 00:11:14.586 | 70.00th=[ 55], 80.00th=[ 56], 90.00th=[ 61], 95.00th=[ 64], 00:11:14.586 | 99.00th=[ 76], 99.50th=[ 82], 99.90th=[ 121], 99.95th=[ 172], 00:11:14.586 | 99.99th=[ 603] 00:11:14.586 bw ( KiB/s): min= 477, max= 4480, per=100.00%, avg=3842.09, stdev=1052.41, samples=23 00:11:14.586 iops : min= 954, max= 8960, avg=7684.17, stdev=2104.83, samples=23 00:11:14.586 lat (usec) : 2=0.01%, 50=43.72%, 100=56.13%, 250=0.12%, 500=0.01% 00:11:14.586 lat (usec) : 750=0.01%, 1000=0.01% 00:11:14.586 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:11:14.586 cpu : usr=2.97%, sys=9.95%, ctx=185528, majf=0, minf=1 00:11:14.586 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.586 issued rwts: total=92902,92606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.586 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.586 00:11:14.586 Run status group 0 (all jobs): 00:11:14.586 READ: bw=3097KiB/s (3171kB/s), 
3097KiB/s-3097KiB/s (3171kB/s-3171kB/s), io=45.4MiB (47.6MB), run=15001-15001msec 00:11:14.586 WRITE: bw=3087KiB/s (3161kB/s), 3087KiB/s-3087KiB/s (3161kB/s-3161kB/s), io=45.2MiB (47.4MB), run=15001-15001msec 00:11:14.586 00:11:14.586 Disk stats (read/write): 00:11:14.586 sda: ios=92026/91700, merge=0/0, ticks=6764/6808, in_queue=13572, util=99.32% 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@100 -- # trap - SIGINT SIGTERM EXIT 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@102 -- # iscsicleanup 00:11:14.587 Cleaning up iSCSI connection 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:11:14.587 Logging out of session [sid: 30, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:14.587 Logout of [sid: 30, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@983 -- # rm -rf 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@103 -- # killprocess 69589 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@948 -- # '[' -z 69589 ']' 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@952 -- # kill -0 69589 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # uname 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69589 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:14.587 killing process with pid 69589 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69589' 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@967 -- # kill 69589 00:11:14.587 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@972 -- # wait 69589 00:11:14.845 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@104 -- # killprocess 69590 00:11:14.845 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@948 -- # '[' -z 69590 ']' 00:11:14.845 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@952 -- # kill -0 69590 00:11:14.845 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- 
common/autotest_common.sh@953 -- # uname 00:11:14.845 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:14.845 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69590 00:11:14.845 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:14.845 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:14.845 killing process with pid 69590 00:11:14.845 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69590' 00:11:14.845 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@967 -- # kill 69590 00:11:14.845 10:11:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@972 -- # wait 69590 00:11:15.103 10:11:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@105 -- # iscsitestfini 00:11:15.103 10:11:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:15.103 00:11:15.103 real 0m20.061s 00:11:15.103 user 0m39.374s 00:11:15.103 sys 0m5.963s 00:11:15.103 10:11:48 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:15.103 10:11:48 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:11:15.103 ************************************ 00:11:15.103 END TEST iscsi_tgt_login_redirection 00:11:15.103 ************************************ 00:11:15.103 10:11:48 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:11:15.103 10:11:48 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@42 -- # run_test iscsi_tgt_digests /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:11:15.103 10:11:48 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:15.103 10:11:48 iscsi_tgt -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.103 10:11:48 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:15.103 ************************************ 00:11:15.103 START TEST iscsi_tgt_digests 00:11:15.103 ************************************ 00:11:15.103 10:11:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:11:15.361 * Looking for test storage... 00:11:15.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 
00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@11 -- # iscsitestinit 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@52 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@54 -- # timing_enter start_iscsi_tgt 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@57 -- # pid=69974 00:11:15.361 Process pid: 69974 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@58 -- # echo 'Process pid: 69974' 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@60 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- 
digests/digests.sh@56 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@62 -- # waitforlisten 69974 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@829 -- # '[' -z 69974 ']' 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.361 10:11:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.362 10:11:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:15.362 [2024-07-25 10:11:48.444218] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:11:15.362 [2024-07-25 10:11:48.444295] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69974 ] 00:11:15.362 [2024-07-25 10:11:48.580083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.619 [2024-07-25 10:11:48.685106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.619 [2024-07-25 10:11:48.685302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.619 [2024-07-25 10:11:48.685486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.619 [2024-07-25 10:11:48.685336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.185 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:16.185 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@862 -- # return 0 00:11:16.185 10:11:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@63 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:11:16.185 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.185 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:16.185 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.185 10:11:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@64 -- # rpc_cmd framework_start_init 00:11:16.185 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.185 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.443 iscsi_tgt is listening. Running tests... 
00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@65 -- # echo 'iscsi_tgt is listening. Running tests...' 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@67 -- # timing_exit start_iscsi_tgt 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@69 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@70 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@71 -- # rpc_cmd bdev_malloc_create 64 512 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:16.443 Malloc0 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@76 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.443 
10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.443 10:11:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@77 -- # sleep 1 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@79 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:11:17.825 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.DataDigest' -v None 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # true 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # DataDigestAbility='iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 00:11:17.825 iscsiadm: Could not execute operation on all records: invalid parameter' 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@84 -- # '[' 'iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 
00:11:17.825 iscsiadm: Could not execute operation on all records: invalid parameterx' '!=' x ']' 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@85 -- # run_test iscsi_tgt_digest iscsi_header_digest_test 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:17.825 ************************************ 00:11:17.825 START TEST iscsi_tgt_digest 00:11:17.825 ************************************ 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1123 -- # iscsi_header_digest_test 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@27 -- # node_login_fio_logout 'HeaderDigest -v CRC32C' 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:11:17.825 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:11:17.825 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:17.825 [2024-07-25 10:11:50.767898] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:11:17.825 10:11:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:11:17.825 [global] 00:11:17.825 thread=1 00:11:17.825 invalidate=1 00:11:17.825 rw=write 00:11:17.825 time_based=1 00:11:17.825 runtime=2 00:11:17.825 ioengine=libaio 00:11:17.825 direct=1 00:11:17.825 bs=512 00:11:17.825 iodepth=1 00:11:17.825 norandommap=1 00:11:17.825 numjobs=1 00:11:17.825 00:11:17.825 [job0] 00:11:17.825 filename=/dev/sda 00:11:17.825 queue_depth set to 113 (sda) 00:11:17.825 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:11:17.825 fio-3.35 00:11:17.825 Starting 1 thread 00:11:17.825 [2024-07-25 10:11:50.949765] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:11:20.360 [2024-07-25 10:11:53.060768] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:20.360 00:11:20.360 job0: (groupid=0, jobs=1): err= 0: pid=70071: Thu Jul 25 10:11:53 2024 00:11:20.360 write: IOPS=13.1k, BW=6569KiB/s (6727kB/s)(12.8MiB/2001msec); 0 zone resets 00:11:20.360 slat (nsec): min=3705, max=66294, avg=5753.90, stdev=1477.15 00:11:20.360 clat (usec): min=53, max=336, avg=69.85, stdev= 8.40 00:11:20.360 lat (usec): min=58, max=402, avg=75.60, stdev= 9.26 00:11:20.360 clat percentiles (usec): 00:11:20.360 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 64], 00:11:20.360 | 30.00th=[ 66], 40.00th=[ 68], 50.00th=[ 70], 60.00th=[ 71], 00:11:20.360 | 70.00th=[ 73], 80.00th=[ 76], 90.00th=[ 79], 95.00th=[ 83], 00:11:20.360 | 99.00th=[ 97], 99.50th=[ 105], 99.90th=[ 133], 99.95th=[ 147], 00:11:20.360 | 99.99th=[ 253] 00:11:20.360 bw ( KiB/s): min= 5866, max= 6897, per=97.56%, avg=6409.33, stdev=517.75, samples=3 00:11:20.360 iops : min=11732, max=13794, avg=12818.67, stdev=1035.50, samples=3 00:11:20.360 lat (usec) : 100=99.21%, 250=0.78%, 500=0.01% 00:11:20.360 cpu : usr=3.20%, sys=10.65%, ctx=26303, majf=0, minf=1 00:11:20.360 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.360 issued rwts: total=0,26290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.360 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.360 00:11:20.360 Run status group 0 (all jobs): 00:11:20.360 WRITE: bw=6569KiB/s (6727kB/s), 6569KiB/s-6569KiB/s (6727kB/s-6727kB/s), io=12.8MiB (13.5MB), run=2001-2001msec 00:11:20.360 00:11:20.360 Disk stats (read/write): 00:11:20.360 sda: ios=48/24681, merge=0/0, ticks=8/1714, in_queue=1722, util=95.47% 00:11:20.360 10:11:53 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- 
digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:11:20.361 [global] 00:11:20.361 thread=1 00:11:20.361 invalidate=1 00:11:20.361 rw=read 00:11:20.361 time_based=1 00:11:20.361 runtime=2 00:11:20.361 ioengine=libaio 00:11:20.361 direct=1 00:11:20.361 bs=512 00:11:20.361 iodepth=1 00:11:20.361 norandommap=1 00:11:20.361 numjobs=1 00:11:20.361 00:11:20.361 [job0] 00:11:20.361 filename=/dev/sda 00:11:20.361 queue_depth set to 113 (sda) 00:11:20.361 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:11:20.361 fio-3.35 00:11:20.361 Starting 1 thread 00:11:22.264 00:11:22.264 job0: (groupid=0, jobs=1): err= 0: pid=70125: Thu Jul 25 10:11:55 2024 00:11:22.264 read: IOPS=14.9k, BW=7461KiB/s (7640kB/s)(14.6MiB/2000msec) 00:11:22.264 slat (nsec): min=3446, max=64002, avg=5607.38, stdev=1473.92 00:11:22.264 clat (usec): min=46, max=2973, avg=61.00, stdev=27.57 00:11:22.264 lat (usec): min=55, max=2981, avg=66.61, stdev=27.78 00:11:22.264 clat percentiles (usec): 00:11:22.264 | 1.00th=[ 53], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 58], 00:11:22.264 | 30.00th=[ 59], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:11:22.264 | 70.00th=[ 62], 80.00th=[ 64], 90.00th=[ 67], 95.00th=[ 70], 00:11:22.264 | 99.00th=[ 82], 99.50th=[ 89], 99.90th=[ 117], 99.95th=[ 302], 00:11:22.264 | 99.99th=[ 2278] 00:11:22.264 bw ( KiB/s): min= 7006, max= 7667, per=99.40%, avg=7416.00, stdev=358.04, samples=3 00:11:22.264 iops : min=14012, max=15334, avg=14832.00, stdev=716.08, samples=3 00:11:22.264 lat (usec) : 50=0.02%, 100=99.82%, 250=0.11%, 500=0.03%, 750=0.01% 00:11:22.264 lat (msec) : 2=0.01%, 4=0.01% 00:11:22.264 cpu : usr=3.20%, sys=12.25%, ctx=29843, majf=0, minf=1 00:11:22.264 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.264 issued rwts: total=29842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.264 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.264 00:11:22.264 Run status group 0 (all jobs): 00:11:22.264 READ: bw=7461KiB/s (7640kB/s), 7461KiB/s-7461KiB/s (7640kB/s-7640kB/s), io=14.6MiB (15.3MB), run=2000-2000msec 00:11:22.264 00:11:22.264 Disk stats (read/write): 00:11:22.264 sda: ios=28238/0, merge=0/0, ticks=1689/0, in_queue=1688, util=94.97% 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:11:22.264 Logging out of session [sid: 31, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:11:22.264 Logout of [sid: 31, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:22.264 iscsiadm: No active sessions. 
00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@31 -- # node_login_fio_logout 'HeaderDigest -v CRC32C,None' 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C,None 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:11:22.264 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:11:22.264 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:22.264 [2024-07-25 10:11:55.517188] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:11:22.264 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:11:22.265 10:11:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:11:22.522 [global] 00:11:22.522 thread=1 00:11:22.522 invalidate=1 00:11:22.522 rw=write 00:11:22.522 time_based=1 00:11:22.522 runtime=2 00:11:22.522 ioengine=libaio 00:11:22.522 direct=1 00:11:22.522 bs=512 00:11:22.522 iodepth=1 00:11:22.522 norandommap=1 00:11:22.522 numjobs=1 00:11:22.522 00:11:22.522 [job0] 00:11:22.522 filename=/dev/sda 00:11:22.522 queue_depth set to 113 (sda) 00:11:22.522 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:11:22.522 fio-3.35 00:11:22.522 Starting 1 thread 00:11:22.522 [2024-07-25 10:11:55.709662] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:11:25.089 [2024-07-25 10:11:57.821528] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:25.089 00:11:25.089 job0: (groupid=0, jobs=1): err= 0: pid=70196: Thu Jul 25 10:11:57 2024 00:11:25.089 write: IOPS=13.7k, BW=6828KiB/s (6992kB/s)(13.3MiB/2001msec); 0 zone resets 00:11:25.089 slat (usec): min=3, max=925, avg= 6.65, stdev= 6.72 00:11:25.089 clat (usec): min=21, max=3547, avg=66.04, stdev=49.26 00:11:25.089 lat (usec): min=58, max=3553, avg=72.69, stdev=49.49 00:11:25.089 clat percentiles (usec): 00:11:25.089 | 1.00th=[ 45], 5.00th=[ 50], 10.00th=[ 60], 20.00th=[ 62], 00:11:25.089 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 65], 60.00th=[ 67], 00:11:25.089 | 70.00th=[ 68], 80.00th=[ 70], 90.00th=[ 73], 95.00th=[ 76], 00:11:25.089 | 99.00th=[ 89], 99.50th=[ 94], 99.90th=[ 153], 99.95th=[ 578], 00:11:25.089 | 99.99th=[ 3163] 00:11:25.089 bw ( KiB/s): min= 6631, max= 6821, per=98.85%, avg=6750.67, stdev=104.16, samples=3 00:11:25.089 iops : min=13262, max=13642, avg=13501.33, stdev=208.33, samples=3 00:11:25.089 lat (usec) : 50=5.54%, 100=94.14%, 250=0.25%, 500=0.03%, 750=0.01% 00:11:25.089 lat (msec) : 2=0.03%, 4=0.02% 00:11:25.089 cpu : usr=3.55%, sys=11.15%, ctx=29429, majf=0, minf=1 00:11:25.089 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:25.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.089 issued rwts: total=0,27327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.089 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:25.089 00:11:25.089 Run status group 0 (all jobs): 00:11:25.089 WRITE: bw=6828KiB/s (6992kB/s), 6828KiB/s-6828KiB/s (6992kB/s-6992kB/s), io=13.3MiB (14.0MB), run=2001-2001msec 00:11:25.089 00:11:25.089 Disk stats (read/write): 00:11:25.089 sda: ios=48/25681, merge=0/0, ticks=7/1699, in_queue=1706, util=95.47% 00:11:25.089 
10:11:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:11:25.089 [global] 00:11:25.089 thread=1 00:11:25.089 invalidate=1 00:11:25.089 rw=read 00:11:25.089 time_based=1 00:11:25.089 runtime=2 00:11:25.089 ioengine=libaio 00:11:25.089 direct=1 00:11:25.089 bs=512 00:11:25.089 iodepth=1 00:11:25.089 norandommap=1 00:11:25.089 numjobs=1 00:11:25.089 00:11:25.089 [job0] 00:11:25.089 filename=/dev/sda 00:11:25.089 queue_depth set to 113 (sda) 00:11:25.089 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:11:25.089 fio-3.35 00:11:25.089 Starting 1 thread 00:11:26.991 00:11:26.991 job0: (groupid=0, jobs=1): err= 0: pid=70249: Thu Jul 25 10:12:00 2024 00:11:26.991 read: IOPS=14.8k, BW=7414KiB/s (7592kB/s)(14.5MiB/2001msec) 00:11:26.991 slat (usec): min=3, max=121, avg= 5.40, stdev= 1.23 00:11:26.991 clat (usec): min=2, max=3828, avg=61.62, stdev=55.59 00:11:26.991 lat (usec): min=54, max=3836, avg=67.01, stdev=55.67 00:11:26.991 clat percentiles (usec): 00:11:26.991 | 1.00th=[ 53], 5.00th=[ 56], 10.00th=[ 56], 20.00th=[ 58], 00:11:26.991 | 30.00th=[ 59], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:11:26.991 | 70.00th=[ 62], 80.00th=[ 63], 90.00th=[ 65], 95.00th=[ 69], 00:11:26.991 | 99.00th=[ 79], 99.50th=[ 84], 99.90th=[ 400], 99.95th=[ 1205], 00:11:26.991 | 99.99th=[ 3589] 00:11:26.991 bw ( KiB/s): min= 7303, max= 7560, per=100.00%, avg=7434.33, stdev=128.59, samples=3 00:11:26.991 iops : min=14606, max=15120, avg=14868.67, stdev=257.19, samples=3 00:11:26.991 lat (usec) : 4=0.01%, 50=0.01%, 100=99.77%, 250=0.09%, 500=0.05% 00:11:26.991 lat (usec) : 750=0.02%, 1000=0.01% 00:11:26.991 lat (msec) : 2=0.02%, 4=0.03% 00:11:26.991 cpu : usr=3.60%, sys=11.15%, ctx=29672, majf=0, minf=1 00:11:26.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.991 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.992 issued rwts: total=29670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.992 00:11:26.992 Run status group 0 (all jobs): 00:11:26.992 READ: bw=7414KiB/s (7592kB/s), 7414KiB/s-7414KiB/s (7592kB/s-7592kB/s), io=14.5MiB (15.2MB), run=2001-2001msec 00:11:26.992 00:11:26.992 Disk stats (read/write): 00:11:26.992 sda: ios=28028/0, merge=0/0, ticks=1667/0, in_queue=1668, util=93.92% 00:11:26.992 10:12:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:11:26.992 Logging out of session [sid: 32, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:11:26.992 Logout of [sid: 32, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:11:26.992 10:12:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:11:26.992 10:12:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:11:26.992 10:12:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:26.992 10:12:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:26.992 10:12:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:26.992 10:12:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:26.992 iscsiadm: No active sessions. 
00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:11:27.250 00:11:27.250 real 0m9.548s 00:11:27.250 user 0m0.802s 00:11:27.250 sys 0m1.224s 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@10 -- # set +x 00:11:27.250 ************************************ 00:11:27.250 END TEST iscsi_tgt_digest 00:11:27.250 ************************************ 00:11:27.250 Cleaning up iSCSI connection 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1142 -- # return 0 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@92 -- # iscsicleanup 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:11:27.250 iscsiadm: No matching sessions found 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@981 -- # true 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@983 -- # rm -rf 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@93 -- # killprocess 69974 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- 
common/autotest_common.sh@948 -- # '[' -z 69974 ']' 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@952 -- # kill -0 69974 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@953 -- # uname 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69974 00:11:27.250 killing process with pid 69974 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69974' 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@967 -- # kill 69974 00:11:27.250 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@972 -- # wait 69974 00:11:27.508 10:12:00 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@94 -- # iscsitestfini 00:11:27.508 10:12:00 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:27.508 ************************************ 00:11:27.508 END TEST iscsi_tgt_digests 00:11:27.508 ************************************ 00:11:27.508 00:11:27.508 real 0m12.389s 00:11:27.508 user 0m45.700s 00:11:27.508 sys 0m3.637s 00:11:27.508 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.509 10:12:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:27.509 10:12:00 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:11:27.509 10:12:00 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@43 -- # run_test iscsi_tgt_fuzz /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:11:27.509 10:12:00 iscsi_tgt -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:27.509 10:12:00 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.509 10:12:00 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:27.509 ************************************ 00:11:27.509 START TEST iscsi_tgt_fuzz 00:11:27.509 ************************************ 00:11:27.509 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:11:27.767 * Looking for test storage... 00:11:27.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/fuzz 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- 
iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@11 -- # iscsitestinit 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@13 -- # '[' -z 10.0.0.1 ']' 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@18 -- # '[' -z 10.0.0.2 ']' 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@23 -- # timing_enter iscsi_fuzz_test 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@25 -- # MALLOC_BDEV_SIZE=64 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@26 -- # MALLOC_BLOCK_SIZE=4096 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@28 -- # TEST_TIMEOUT=1200 00:11:27.767 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@31 -- # for i in "$@" 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@32 -- # case "$i" in 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- 
fuzz/autofuzz_iscsi.sh@34 -- # TEST_TIMEOUT=30 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@39 -- # timing_enter start_iscsi_tgt 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@42 -- # iscsipid=70351 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --disable-cpumask-locks --wait-for-rpc 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@43 -- # echo 'Process iscsipid: 70351' 00:11:27.768 Process iscsipid: 70351 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@45 -- # trap 'killprocess $iscsipid; exit 1' SIGINT SIGTERM EXIT 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@47 -- # waitforlisten 70351 00:11:27.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@829 -- # '[' -z 70351 ']' 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.768 10:12:00 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@862 -- # return 0 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@49 -- # rpc_cmd iscsi_set_options -o 60 -a 16 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@50 -- # rpc_cmd framework_start_init 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:28.701 iscsi_tgt is listening. Running tests... 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@51 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@52 -- # timing_exit start_iscsi_tgt 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@54 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@55 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.701 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:28.959 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.959 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@56 -- # rpc_cmd bdev_malloc_create 64 4096 00:11:28.959 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.959 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:28.959 Malloc0 00:11:28.959 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.959 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@57 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:11:28.959 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.959 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:28.959 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:11:28.959 10:12:01 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@58 -- # sleep 1 00:11:29.893 10:12:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@60 -- # trap 'killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:29.893 10:12:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/iscsi_fuzz/iscsi_fuzz -m 0xF0 -T 10.0.0.1 -t 30 00:12:01.958 Fuzzing completed. Shutting down the fuzz application. 00:12:01.958 00:12:01.958 device 0x13a2d40 stats: Sent 13013 valid opcode PDUs, 118317 invalid opcode PDUs. 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@64 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:disk1 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@67 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@71 -- # killprocess 70351 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@948 -- # '[' -z 70351 ']' 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@952 -- # kill -0 70351 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@953 -- # uname 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70351 00:12:01.958 killing process with pid 70351 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70351' 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@967 -- # kill 70351 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@972 -- # wait 70351 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@73 -- # iscsitestfini 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@75 -- # timing_exit iscsi_fuzz_test 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:01.958 ************************************ 00:12:01.958 END TEST iscsi_tgt_fuzz 00:12:01.958 ************************************ 00:12:01.958 00:12:01.958 real 0m33.109s 00:12:01.958 user 3m9.120s 00:12:01.958 sys 0m16.185s 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:01.958 10:12:33 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:12:01.958 10:12:33 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@44 -- # run_test iscsi_tgt_multiconnection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:12:01.958 10:12:33 iscsi_tgt -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:12:01.958 10:12:33 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.958 10:12:33 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:01.958 ************************************ 00:12:01.958 START TEST iscsi_tgt_multiconnection 00:12:01.958 ************************************ 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:12:01.958 * Looking for test storage... 00:12:01.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:01.958 
10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@11 -- # iscsitestinit 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@18 -- # CONNECTION_NUMBER=30 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@40 -- # timing_enter start_iscsi_tgt 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set 
+x 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@42 -- # iscsipid=70787 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:12:01.958 iSCSI target launched. pid: 70787 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@43 -- # echo 'iSCSI target launched. pid: 70787' 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@44 -- # trap 'remove_backends; iscsicleanup; killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@46 -- # waitforlisten 70787 00:12:01.958 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 70787 ']' 00:12:01.959 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.959 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.959 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.959 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.959 10:12:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:12:01.959 [2024-07-25 10:12:34.090056] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:12:01.959 [2024-07-25 10:12:34.090543] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70787 ] 00:12:01.959 [2024-07-25 10:12:34.240336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.959 [2024-07-25 10:12:34.370788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.959 10:12:34 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:01.959 10:12:34 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:12:01.959 10:12:34 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 128 00:12:01.959 10:12:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:02.525 10:12:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:02.525 10:12:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:12:02.783 10:12:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@50 -- # timing_exit start_iscsi_tgt 00:12:02.783 10:12:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:02.783 10:12:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:12:02.783 10:12:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:12:03.042 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:12:03.300 Creating an iSCSI target node. 00:12:03.300 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@55 -- # echo 'Creating an iSCSI target node.' 00:12:03.300 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs0 -c 1048576 00:12:03.558 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # ls_guid=8b779baf-a98f-4859-b7f0-05f567781373 00:12:03.558 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@59 -- # get_lvs_free_mb 8b779baf-a98f-4859-b7f0-05f567781373 00:12:03.558 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1364 -- # local lvs_uuid=8b779baf-a98f-4859-b7f0-05f567781373 00:12:03.558 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1365 -- # local lvs_info 00:12:03.558 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1366 -- # local fc 00:12:03.558 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1367 -- # local cs 00:12:03.558 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:12:03.816 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:12:03.816 { 00:12:03.816 "uuid": "8b779baf-a98f-4859-b7f0-05f567781373", 00:12:03.816 "name": "lvs0", 00:12:03.816 "base_bdev": "Nvme0n1", 00:12:03.816 "total_data_clusters": 5099, 00:12:03.816 "free_clusters": 5099, 00:12:03.816 "block_size": 4096, 00:12:03.816 "cluster_size": 1048576 00:12:03.816 } 00:12:03.816 ]' 00:12:03.816 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # jq '.[] | 
select(.uuid=="8b779baf-a98f-4859-b7f0-05f567781373") .free_clusters' 00:12:03.816 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # fc=5099 00:12:03.816 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="8b779baf-a98f-4859-b7f0-05f567781373") .cluster_size' 00:12:03.816 5099 00:12:03.816 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # cs=1048576 00:12:03.816 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1373 -- # free_mb=5099 00:12:03.816 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1374 -- # echo 5099 00:12:03.816 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@60 -- # lvol_bdev_size=169 00:12:03.816 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # seq 1 30 00:12:03.816 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:03.816 10:12:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_1 169 00:12:04.074 36ded4b7-b206-4ea3-b715-5b58a4001257 00:12:04.074 10:12:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:04.074 10:12:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_2 169 00:12:04.334 b3a75bb9-7785-4175-93b9-94047ab2f6db 00:12:04.334 10:12:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:04.334 10:12:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_3 169 00:12:04.334 181679de-c53c-4c41-b2f3-b55949fd038c 00:12:04.334 10:12:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:04.334 10:12:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_4 169 00:12:04.593 1b99800c-75fc-4600-9e8f-0a7050598386 00:12:04.593 10:12:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:04.593 10:12:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_5 169 00:12:04.899 ebce15a2-7adf-4d47-97e0-296f89aa2149 00:12:04.899 10:12:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:04.899 10:12:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_6 169 00:12:04.899 3ed1ecfc-8fea-4fd9-a44b-dba0a39b4b84 00:12:04.899 10:12:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:04.899 10:12:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_7 169 00:12:05.156 b09ecda2-a795-4f67-b323-29f5bac50eb6 00:12:05.156 10:12:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:05.156 10:12:38 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_8 169 00:12:05.415 d5e8a5dc-8cb2-4541-9860-545bde0b1856 00:12:05.415 10:12:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:05.415 10:12:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_9 169 00:12:05.673 722768af-8822-46d5-8663-bbe434be7169 00:12:05.673 10:12:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:05.673 10:12:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_10 169 00:12:05.932 4174ccb5-d08d-473a-b888-4e7fda8edadb 00:12:05.932 10:12:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:05.932 10:12:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_11 169 00:12:05.932 0ce0a2f5-ae9a-403f-b404-f406e0312aa4 00:12:05.932 10:12:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:05.932 10:12:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_12 169 00:12:06.191 0ef9acba-c909-4bea-a453-e03a180d57ca 00:12:06.191 10:12:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:06.191 10:12:39 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_13 169 00:12:06.450 6d224d8f-bd04-4cb9-8eaf-66c68b832f94 00:12:06.450 10:12:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:06.450 10:12:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_14 169 00:12:06.709 345c5592-c558-4d58-a4d9-fe6e0c974397 00:12:06.709 10:12:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:06.709 10:12:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_15 169 00:12:06.967 71363de9-2f48-431e-875c-51c216f2bd7e 00:12:06.967 10:12:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:06.967 10:12:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_16 169 00:12:07.226 cb001cbd-7806-42a6-a9ce-ec8814f42991 00:12:07.226 10:12:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:07.226 10:12:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_17 169 00:12:07.484 b5be91b7-92d2-49b1-ab08-1cca276f4e16 00:12:07.484 10:12:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:12:07.484 10:12:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_18 169 00:12:07.796 697b28c8-775b-43fd-996d-d07671ccee3e 00:12:07.796 10:12:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:07.796 10:12:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_19 169 00:12:08.072 d2becb7d-4f33-45d2-b46d-a86f15c0e186 00:12:08.072 10:12:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:08.072 10:12:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_20 169 00:12:08.072 2d3d7636-fa4e-4da2-8386-4f1937164e47 00:12:08.072 10:12:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:08.072 10:12:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_21 169 00:12:08.330 7b286e2f-60b9-400c-9268-a5dca2381761 00:12:08.330 10:12:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:08.330 10:12:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_22 169 00:12:08.588 0d4f76c9-2310-421c-9591-3ba47edf4fef 00:12:08.588 10:12:41 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:08.588 10:12:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_23 169 00:12:08.845 5e1cb099-3e6f-4f61-84b1-8df4ba4758d8 00:12:08.845 10:12:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:08.845 10:12:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_24 169 00:12:09.103 ae212346-85f1-4dfb-8af9-fa09ae75de66 00:12:09.103 10:12:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:09.103 10:12:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_25 169 00:12:09.361 03c7d872-3211-4b96-948b-e63af869f020 00:12:09.361 10:12:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:09.361 10:12:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_26 169 00:12:09.619 7b2a1e94-7768-43e0-95d5-c0f59f12b91a 00:12:09.619 10:12:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:09.619 10:12:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_27 169 00:12:09.619 4de600fc-165d-4d53-b125-ff1856d7576b 00:12:09.619 10:12:42 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:09.619 10:12:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_28 169 00:12:09.877 41e4bb56-e027-4ec2-af5a-a295c7756c44 00:12:09.877 10:12:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:09.877 10:12:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_29 169 00:12:10.135 d40e3156-fb50-4dc8-9c0a-400833f664d0 00:12:10.135 10:12:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:10.135 10:12:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b779baf-a98f-4859-b7f0-05f567781373 lbd_30 169 00:12:10.393 5a97a231-2cd3-476e-a14e-9622706994a8 00:12:10.393 10:12:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # seq 1 30 00:12:10.393 10:12:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:10.393 10:12:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_1:0 00:12:10.393 10:12:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias lvs0/lbd_1:0 1:2 256 -d 00:12:10.652 10:12:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:10.652 10:12:43 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_2:0 00:12:10.652 10:12:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias lvs0/lbd_2:0 1:2 256 -d 00:12:10.911 10:12:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:10.911 10:12:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_3:0 00:12:10.911 10:12:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias lvs0/lbd_3:0 1:2 256 -d 00:12:10.911 10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:10.911 10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_4:0 00:12:10.911 10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias lvs0/lbd_4:0 1:2 256 -d 00:12:11.169 10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:11.169 10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_5:0 00:12:11.169 10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias lvs0/lbd_5:0 1:2 256 -d 00:12:11.427 10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:11.427 10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_6:0 00:12:11.427 
10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias lvs0/lbd_6:0 1:2 256 -d 00:12:11.790 10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:11.790 10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_7:0 00:12:11.790 10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias lvs0/lbd_7:0 1:2 256 -d 00:12:11.790 10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:11.790 10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_8:0 00:12:11.790 10:12:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias lvs0/lbd_8:0 1:2 256 -d 00:12:12.049 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:12.049 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_9:0 00:12:12.049 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias lvs0/lbd_9:0 1:2 256 -d 00:12:12.049 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:12.049 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_10:0 00:12:12.049 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias lvs0/lbd_10:0 1:2 256 -d 00:12:12.307 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:12.307 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_11:0 00:12:12.307 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target11 Target11_alias lvs0/lbd_11:0 1:2 256 -d 00:12:12.565 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:12.565 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_12:0 00:12:12.565 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target12 Target12_alias lvs0/lbd_12:0 1:2 256 -d 00:12:12.824 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:12.824 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_13:0 00:12:12.824 10:12:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target13 Target13_alias lvs0/lbd_13:0 1:2 256 -d 00:12:13.083 10:12:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:13.083 10:12:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_14:0 00:12:13.083 10:12:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target14 Target14_alias lvs0/lbd_14:0 1:2 256 -d 00:12:13.341 10:12:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:13.341 10:12:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_15:0 00:12:13.341 10:12:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target15 Target15_alias lvs0/lbd_15:0 1:2 256 -d 00:12:13.600 10:12:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:13.600 10:12:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_16:0 00:12:13.600 10:12:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target16 Target16_alias lvs0/lbd_16:0 1:2 256 -d 00:12:13.858 10:12:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:13.858 10:12:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_17:0 00:12:13.858 10:12:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target17 Target17_alias lvs0/lbd_17:0 1:2 256 -d 00:12:14.117 10:12:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:14.117 10:12:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_18:0 00:12:14.117 10:12:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
iscsi_create_target_node Target18 Target18_alias lvs0/lbd_18:0 1:2 256 -d 00:12:14.117 10:12:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:14.117 10:12:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_19:0 00:12:14.117 10:12:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target19 Target19_alias lvs0/lbd_19:0 1:2 256 -d 00:12:14.376 10:12:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:14.376 10:12:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_20:0 00:12:14.376 10:12:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target20 Target20_alias lvs0/lbd_20:0 1:2 256 -d 00:12:14.636 10:12:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:14.636 10:12:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_21:0 00:12:14.636 10:12:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target21 Target21_alias lvs0/lbd_21:0 1:2 256 -d 00:12:14.895 10:12:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:14.895 10:12:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_22:0 00:12:14.895 10:12:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target22 Target22_alias 
lvs0/lbd_22:0 1:2 256 -d 00:12:15.154 10:12:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:15.154 10:12:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_23:0 00:12:15.154 10:12:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target23 Target23_alias lvs0/lbd_23:0 1:2 256 -d 00:12:15.412 10:12:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:15.412 10:12:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_24:0 00:12:15.412 10:12:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target24 Target24_alias lvs0/lbd_24:0 1:2 256 -d 00:12:15.671 10:12:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:15.671 10:12:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_25:0 00:12:15.671 10:12:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target25 Target25_alias lvs0/lbd_25:0 1:2 256 -d 00:12:15.929 10:12:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:15.929 10:12:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_26:0 00:12:15.929 10:12:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target26 Target26_alias lvs0/lbd_26:0 1:2 256 -d 00:12:16.188 10:12:49 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:16.188 10:12:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_27:0 00:12:16.188 10:12:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target27 Target27_alias lvs0/lbd_27:0 1:2 256 -d 00:12:16.188 10:12:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:16.188 10:12:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_28:0 00:12:16.188 10:12:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target28 Target28_alias lvs0/lbd_28:0 1:2 256 -d 00:12:16.446 10:12:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:16.446 10:12:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_29:0 00:12:16.446 10:12:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target29 Target29_alias lvs0/lbd_29:0 1:2 256 -d 00:12:16.704 10:12:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:16.704 10:12:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_30:0 00:12:16.704 10:12:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target30 Target30_alias lvs0/lbd_30:0 1:2 256 -d 00:12:16.963 10:12:50 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@69 -- # sleep 1 00:12:17.897 10:12:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@71 -- # echo 'Logging into iSCSI target.' 00:12:17.897 Logging into iSCSI target. 00:12:17.897 10:12:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@72 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target16 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target17 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target18 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target19 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target20 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target21 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target22 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target23 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target24 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target25 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target26 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target27 00:12:17.897 10.0.0.1:3260,1 
iqn.2016-06.io.spdk:Target28 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target29 00:12:17.897 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target30 00:12:17.897 10:12:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@73 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:12:18.156 [2024-07-25 10:12:51.176681] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.156 [2024-07-25 10:12:51.210234] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.156 [2024-07-25 10:12:51.243250] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.156 [2024-07-25 10:12:51.246382] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.156 [2024-07-25 10:12:51.279902] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.156 [2024-07-25 10:12:51.317743] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.156 [2024-07-25 10:12:51.348543] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.156 [2024-07-25 10:12:51.370806] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.156 [2024-07-25 10:12:51.408775] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.415 [2024-07-25 10:12:51.429186] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.415 [2024-07-25 10:12:51.453241] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.415 [2024-07-25 10:12:51.487873] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.415 [2024-07-25 10:12:51.506165] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:12:18.415 
Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:12:18.415 Logging in 
to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:12:18.415 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:12:18.415 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:12:18.415 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:12:18.415 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:12:18.415 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:12:18.415 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:12:18.415 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:12:18.415 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:12:18.415 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 
00:12:18.415 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:12:18.415 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:12:18.415 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:12:18.415 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:12:18.415 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:12:18.415 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:12:18.932 [2024-07-25 10:12:51.541255] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.415 [2024-07-25 10:12:51.564691] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.415 [2024-07-25 10:12:51.606993] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.415 [2024-07-25 10:12:51.622877] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.415 [2024-07-25 10:12:51.670449] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.674 [2024-07-25 10:12:51.702345] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.674 [2024-07-25 10:12:51.739832] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.674 [2024-07-25 10:12:51.765060] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.674 [2024-07-25 10:12:51.805519] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.674 [2024-07-25 10:12:51.856980] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.674 [2024-07-25 10:12:51.890036] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.932 [2024-07-25 
10:12:51.942675] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.932 [2024-07-25 10:12:51.970318] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.932 [2024-07-25 10:12:52.036985] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.932 [2024-07-25 10:12:52.076597] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.932 [2024-07-25 10:12:52.107970] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 
00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:12:18.932 Login to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 00:12:18.932 10:12:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@74 -- # waitforiscsidevices 30 00:12:18.932 10:12:52 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@116 -- # local num=30 00:12:18.932 10:12:52 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:12:18.932 10:12:52 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:18.932 10:12:52 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:18.932 10:12:52 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:18.932 [2024-07-25 10:12:52.130784] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:18.932 10:12:52 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # n=30 00:12:18.932 10:12:52 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@120 -- # '[' 30 -ne 30 ']' 00:12:18.932 10:12:52 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@123 -- # return 0 00:12:18.932 10:12:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@76 -- # echo 'Running FIO' 00:12:18.932 Running FIO 00:12:18.932 10:12:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 64 -t randrw -r 5 00:12:19.190 [global] 00:12:19.190 thread=1 00:12:19.190 invalidate=1 00:12:19.190 rw=randrw 00:12:19.190 
time_based=1 00:12:19.190 runtime=5 00:12:19.190 ioengine=libaio 00:12:19.190 direct=1 00:12:19.190 bs=131072 00:12:19.190 iodepth=64 00:12:19.190 norandommap=1 00:12:19.190 numjobs=1 00:12:19.190 00:12:19.190 [job0] 00:12:19.190 filename=/dev/sda 00:12:19.190 [job1] 00:12:19.190 filename=/dev/sdb 00:12:19.190 [job2] 00:12:19.190 filename=/dev/sdc 00:12:19.190 [job3] 00:12:19.190 filename=/dev/sdd 00:12:19.190 [job4] 00:12:19.190 filename=/dev/sde 00:12:19.190 [job5] 00:12:19.190 filename=/dev/sdf 00:12:19.190 [job6] 00:12:19.190 filename=/dev/sdg 00:12:19.190 [job7] 00:12:19.190 filename=/dev/sdh 00:12:19.190 [job8] 00:12:19.190 filename=/dev/sdi 00:12:19.190 [job9] 00:12:19.190 filename=/dev/sdj 00:12:19.190 [job10] 00:12:19.190 filename=/dev/sdk 00:12:19.190 [job11] 00:12:19.190 filename=/dev/sdl 00:12:19.190 [job12] 00:12:19.190 filename=/dev/sdm 00:12:19.190 [job13] 00:12:19.190 filename=/dev/sdn 00:12:19.190 [job14] 00:12:19.190 filename=/dev/sdo 00:12:19.190 [job15] 00:12:19.190 filename=/dev/sdp 00:12:19.190 [job16] 00:12:19.190 filename=/dev/sdq 00:12:19.190 [job17] 00:12:19.190 filename=/dev/sdr 00:12:19.190 [job18] 00:12:19.190 filename=/dev/sds 00:12:19.190 [job19] 00:12:19.190 filename=/dev/sdt 00:12:19.190 [job20] 00:12:19.190 filename=/dev/sdu 00:12:19.190 [job21] 00:12:19.190 filename=/dev/sdv 00:12:19.190 [job22] 00:12:19.190 filename=/dev/sdw 00:12:19.190 [job23] 00:12:19.190 filename=/dev/sdx 00:12:19.190 [job24] 00:12:19.190 filename=/dev/sdy 00:12:19.190 [job25] 00:12:19.190 filename=/dev/sdz 00:12:19.190 [job26] 00:12:19.190 filename=/dev/sdaa 00:12:19.190 [job27] 00:12:19.190 filename=/dev/sdab 00:12:19.190 [job28] 00:12:19.190 filename=/dev/sdac 00:12:19.190 [job29] 00:12:19.190 filename=/dev/sdad 00:12:19.757 queue_depth set to 113 (sda) 00:12:19.757 queue_depth set to 113 (sdb) 00:12:19.757 queue_depth set to 113 (sdc) 00:12:19.757 queue_depth set to 113 (sdd) 00:12:19.757 queue_depth set to 113 (sde) 00:12:19.757 queue_depth set to 113 
(sdf) 00:12:19.757 queue_depth set to 113 (sdg) 00:12:19.757 queue_depth set to 113 (sdh) 00:12:19.757 queue_depth set to 113 (sdi) 00:12:19.757 queue_depth set to 113 (sdj) 00:12:19.757 queue_depth set to 113 (sdk) 00:12:20.017 queue_depth set to 113 (sdl) 00:12:20.017 queue_depth set to 113 (sdm) 00:12:20.017 queue_depth set to 113 (sdn) 00:12:20.017 queue_depth set to 113 (sdo) 00:12:20.017 queue_depth set to 113 (sdp) 00:12:20.017 queue_depth set to 113 (sdq) 00:12:20.017 queue_depth set to 113 (sdr) 00:12:20.017 queue_depth set to 113 (sds) 00:12:20.017 queue_depth set to 113 (sdt) 00:12:20.017 queue_depth set to 113 (sdu) 00:12:20.017 queue_depth set to 113 (sdv) 00:12:20.282 queue_depth set to 113 (sdw) 00:12:20.282 queue_depth set to 113 (sdx) 00:12:20.282 queue_depth set to 113 (sdy) 00:12:20.282 queue_depth set to 113 (sdz) 00:12:20.282 queue_depth set to 113 (sdaa) 00:12:20.282 queue_depth set to 113 (sdab) 00:12:20.282 queue_depth set to 113 (sdac) 00:12:20.282 queue_depth set to 113 (sdad) 00:12:20.541 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, 
ioengine=libaio, iodepth=64 00:12:20.541 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job16: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job17: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job18: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job19: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job20: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job21: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job22: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job23: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 
128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job24: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job25: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job26: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job27: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job28: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 job29: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:20.541 fio-3.35 00:12:20.541 Starting 30 threads 00:12:20.541 [2024-07-25 10:12:53.607597] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.541 [2024-07-25 10:12:53.611707] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.541 [2024-07-25 10:12:53.614191] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.616596] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.618896] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.621156] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.623427] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.625610] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.628038] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.630297] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.632555] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.634709] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.636833] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.638952] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.641102] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.643151] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.645428] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.647553] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.649735] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.651890] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.653978] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.656247] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.658467] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.660564] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.662894] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.665053] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.669378] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.673497] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.675901] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.542 [2024-07-25 10:12:53.678390] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.685755] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.699152] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.702634] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.706800] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.710469] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.713276] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.716223] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.719372] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.722707] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.725570] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.728915] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.731996] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 
[2024-07-25 10:12:59.734814] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.737028] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.740335] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.743125] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.746636] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 [2024-07-25 10:12:59.751588] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.106 00:12:27.106 job0: (groupid=0, jobs=1): err= 0: pid=71715: Thu Jul 25 10:12:59 2024 00:12:27.106 read: IOPS=83, BW=10.4MiB/s (10.9MB/s)(55.6MiB/5350msec) 00:12:27.106 slat (usec): min=7, max=302, avg=25.47, stdev=18.96 00:12:27.106 clat (msec): min=31, max=369, avg=54.53, stdev=35.17 00:12:27.106 lat (msec): min=31, max=370, avg=54.55, stdev=35.17 00:12:27.106 clat percentiles (msec): 00:12:27.107 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 43], 00:12:27.107 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.107 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 77], 95.00th=[ 125], 00:12:27.107 | 99.00th=[ 169], 99.50th=[ 351], 99.90th=[ 372], 99.95th=[ 372], 00:12:27.107 | 99.99th=[ 372] 00:12:27.107 bw ( KiB/s): min= 7168, max=18981, per=3.34%, avg=11309.30, stdev=3293.71, samples=10 00:12:27.107 iops : min= 56, max= 148, avg=88.10, stdev=25.71, samples=10 00:12:27.107 write: IOPS=89, BW=11.2MiB/s (11.8MB/s)(60.1MiB/5350msec); 0 zone resets 00:12:27.107 slat (usec): min=14, max=131, avg=31.12, stdev=15.41 00:12:27.107 clat (msec): min=173, max=959, avg=660.44, stdev=99.64 00:12:27.107 lat (msec): min=173, max=959, avg=660.47, stdev=99.64 00:12:27.107 clat percentiles (msec): 00:12:27.107 | 1.00th=[ 279], 5.00th=[ 443], 
10.00th=[ 584], 20.00th=[ 642], 00:12:27.107 | 30.00th=[ 659], 40.00th=[ 659], 50.00th=[ 676], 60.00th=[ 684], 00:12:27.107 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 726], 95.00th=[ 760], 00:12:27.107 | 99.00th=[ 944], 99.50th=[ 953], 99.90th=[ 961], 99.95th=[ 961], 00:12:27.107 | 99.99th=[ 961] 00:12:27.107 bw ( KiB/s): min= 5130, max=12032, per=3.15%, avg=10743.90, stdev=2038.45, samples=10 00:12:27.107 iops : min= 40, max= 94, avg=83.70, stdev=15.91, samples=10 00:12:27.107 lat (msec) : 50=38.88%, 100=5.51%, 250=3.78%, 500=3.46%, 750=45.57% 00:12:27.107 lat (msec) : 1000=2.81% 00:12:27.107 cpu : usr=0.21%, sys=0.49%, ctx=584, majf=0, minf=1 00:12:27.107 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2% 00:12:27.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.107 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.107 issued rwts: total=445,481,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.107 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.107 job1: (groupid=0, jobs=1): err= 0: pid=71716: Thu Jul 25 10:12:59 2024 00:12:27.107 read: IOPS=78, BW=9989KiB/s (10.2MB/s)(52.6MiB/5395msec) 00:12:27.107 slat (usec): min=7, max=1058, avg=36.70, stdev=70.48 00:12:27.107 clat (msec): min=5, max=405, avg=52.74, stdev=35.53 00:12:27.107 lat (msec): min=5, max=406, avg=52.78, stdev=35.53 00:12:27.107 clat percentiles (msec): 00:12:27.107 | 1.00th=[ 17], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 43], 00:12:27.107 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.107 | 70.00th=[ 47], 80.00th=[ 49], 90.00th=[ 58], 95.00th=[ 106], 00:12:27.107 | 99.00th=[ 213], 99.50th=[ 222], 99.90th=[ 405], 99.95th=[ 405], 00:12:27.107 | 99.99th=[ 405] 00:12:27.107 bw ( KiB/s): min= 6925, max=16640, per=3.17%, avg=10753.30, stdev=2884.32, samples=10 00:12:27.107 iops : min= 54, max= 130, avg=84.00, stdev=22.55, samples=10 00:12:27.107 write: IOPS=88, BW=11.1MiB/s 
(11.7MB/s)(60.0MiB/5395msec); 0 zone resets 00:12:27.107 slat (usec): min=13, max=351, avg=41.44, stdev=34.53 00:12:27.107 clat (msec): min=88, max=1045, avg=672.08, stdev=108.62 00:12:27.107 lat (msec): min=88, max=1045, avg=672.12, stdev=108.63 00:12:27.107 clat percentiles (msec): 00:12:27.107 | 1.00th=[ 251], 5.00th=[ 456], 10.00th=[ 584], 20.00th=[ 642], 00:12:27.107 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 676], 60.00th=[ 684], 00:12:27.107 | 70.00th=[ 709], 80.00th=[ 726], 90.00th=[ 751], 95.00th=[ 793], 00:12:27.107 | 99.00th=[ 1011], 99.50th=[ 1028], 99.90th=[ 1045], 99.95th=[ 1045], 00:12:27.107 | 99.99th=[ 1045] 00:12:27.107 bw ( KiB/s): min= 4608, max=12032, per=3.13%, avg=10677.30, stdev=2202.45, samples=10 00:12:27.107 iops : min= 36, max= 94, avg=83.40, stdev=17.21, samples=10 00:12:27.107 lat (msec) : 10=0.22%, 20=0.55%, 50=38.51%, 100=4.66%, 250=3.22% 00:12:27.107 lat (msec) : 500=3.22%, 750=44.17%, 1000=4.66%, 2000=0.78% 00:12:27.107 cpu : usr=0.26%, sys=0.48%, ctx=590, majf=0, minf=1 00:12:27.107 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:12:27.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.107 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.107 issued rwts: total=421,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.107 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.107 job2: (groupid=0, jobs=1): err= 0: pid=71754: Thu Jul 25 10:12:59 2024 00:12:27.107 read: IOPS=91, BW=11.4MiB/s (11.9MB/s)(60.9MiB/5347msec) 00:12:27.107 slat (usec): min=6, max=1006, avg=37.61, stdev=72.62 00:12:27.107 clat (msec): min=29, max=369, avg=53.79, stdev=33.86 00:12:27.107 lat (msec): min=29, max=369, avg=53.83, stdev=33.86 00:12:27.107 clat percentiles (msec): 00:12:27.107 | 1.00th=[ 32], 5.00th=[ 38], 10.00th=[ 42], 20.00th=[ 43], 00:12:27.107 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.107 | 70.00th=[ 47], 
80.00th=[ 49], 90.00th=[ 68], 95.00th=[ 123], 00:12:27.107 | 99.00th=[ 199], 99.50th=[ 207], 99.90th=[ 372], 99.95th=[ 372], 00:12:27.107 | 99.99th=[ 372] 00:12:27.107 bw ( KiB/s): min= 8977, max=15298, per=3.66%, avg=12410.40, stdev=2215.17, samples=10 00:12:27.107 iops : min= 70, max= 119, avg=96.70, stdev=17.17, samples=10 00:12:27.107 write: IOPS=89, BW=11.2MiB/s (11.8MB/s)(60.0MiB/5347msec); 0 zone resets 00:12:27.107 slat (usec): min=8, max=612, avg=44.00, stdev=57.50 00:12:27.107 clat (msec): min=186, max=1017, avg=657.42, stdev=100.33 00:12:27.107 lat (msec): min=186, max=1017, avg=657.47, stdev=100.34 00:12:27.107 clat percentiles (msec): 00:12:27.107 | 1.00th=[ 292], 5.00th=[ 435], 10.00th=[ 575], 20.00th=[ 634], 00:12:27.107 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 676], 00:12:27.107 | 70.00th=[ 684], 80.00th=[ 701], 90.00th=[ 726], 95.00th=[ 768], 00:12:27.107 | 99.00th=[ 953], 99.50th=[ 1003], 99.90th=[ 1020], 99.95th=[ 1020], 00:12:27.107 | 99.99th=[ 1020] 00:12:27.107 bw ( KiB/s): min= 4844, max=12032, per=3.15%, avg=10726.80, stdev=2130.94, samples=10 00:12:27.107 iops : min= 37, max= 94, avg=83.50, stdev=16.87, samples=10 00:12:27.107 lat (msec) : 50=42.71%, 100=4.14%, 250=3.62%, 500=3.52%, 750=43.02% 00:12:27.107 lat (msec) : 1000=2.69%, 2000=0.31% 00:12:27.107 cpu : usr=0.32%, sys=0.43%, ctx=646, majf=0, minf=1 00:12:27.107 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.5% 00:12:27.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.107 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.107 issued rwts: total=487,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.107 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.107 job3: (groupid=0, jobs=1): err= 0: pid=71763: Thu Jul 25 10:12:59 2024 00:12:27.107 read: IOPS=83, BW=10.4MiB/s (10.9MB/s)(56.2MiB/5388msec) 00:12:27.107 slat (usec): min=9, max=208, avg=26.07, stdev=16.20 
00:12:27.107 clat (msec): min=2, max=412, avg=52.52, stdev=40.33 00:12:27.107 lat (msec): min=2, max=412, avg=52.54, stdev=40.33 00:12:27.107 clat percentiles (msec): 00:12:27.107 | 1.00th=[ 4], 5.00th=[ 20], 10.00th=[ 35], 20.00th=[ 43], 00:12:27.107 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.107 | 70.00th=[ 47], 80.00th=[ 49], 90.00th=[ 56], 95.00th=[ 112], 00:12:27.107 | 99.00th=[ 207], 99.50th=[ 213], 99.90th=[ 414], 99.95th=[ 414], 00:12:27.107 | 99.99th=[ 414] 00:12:27.107 bw ( KiB/s): min= 6912, max=18139, per=3.38%, avg=11462.60, stdev=3271.09, samples=10 00:12:27.107 iops : min= 54, max= 141, avg=89.40, stdev=25.37, samples=10 00:12:27.107 write: IOPS=89, BW=11.2MiB/s (11.7MB/s)(60.2MiB/5388msec); 0 zone resets 00:12:27.107 slat (usec): min=13, max=825, avg=34.72, stdev=40.11 00:12:27.107 clat (usec): min=1293, max=1040.9k, avg=665352.58, stdev=122478.43 00:12:27.107 lat (usec): min=1363, max=1040.9k, avg=665387.29, stdev=122480.97 00:12:27.107 clat percentiles (msec): 00:12:27.107 | 1.00th=[ 57], 5.00th=[ 456], 10.00th=[ 600], 20.00th=[ 642], 00:12:27.107 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 676], 60.00th=[ 684], 00:12:27.107 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 743], 95.00th=[ 827], 00:12:27.107 | 99.00th=[ 1020], 99.50th=[ 1028], 99.90th=[ 1045], 99.95th=[ 1045], 00:12:27.107 | 99.99th=[ 1045] 00:12:27.107 bw ( KiB/s): min= 5620, max=12032, per=3.16%, avg=10774.20, stdev=1875.02, samples=10 00:12:27.107 iops : min= 43, max= 94, avg=84.00, stdev=14.93, samples=10 00:12:27.107 lat (msec) : 2=0.21%, 4=0.64%, 10=0.97%, 20=0.97%, 50=38.95% 00:12:27.107 lat (msec) : 100=3.86%, 250=3.43%, 500=2.47%, 750=44.10%, 1000=3.76% 00:12:27.107 lat (msec) : 2000=0.64% 00:12:27.107 cpu : usr=0.24%, sys=0.48%, ctx=579, majf=0, minf=1 00:12:27.107 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.2% 00:12:27.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.107 complete : 
0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.107 issued rwts: total=450,482,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.107 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.107 job4: (groupid=0, jobs=1): err= 0: pid=71773: Thu Jul 25 10:12:59 2024 00:12:27.107 read: IOPS=87, BW=10.9MiB/s (11.4MB/s)(58.1MiB/5339msec) 00:12:27.107 slat (usec): min=7, max=130, avg=27.23, stdev=15.90 00:12:27.107 clat (msec): min=31, max=361, avg=58.07, stdev=37.15 00:12:27.107 lat (msec): min=31, max=361, avg=58.10, stdev=37.15 00:12:27.107 clat percentiles (msec): 00:12:27.107 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 44], 00:12:27.107 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 47], 00:12:27.107 | 70.00th=[ 48], 80.00th=[ 52], 90.00th=[ 106], 95.00th=[ 142], 00:12:27.107 | 99.00th=[ 169], 99.50th=[ 342], 99.90th=[ 363], 99.95th=[ 363], 00:12:27.107 | 99.99th=[ 363] 00:12:27.107 bw ( KiB/s): min= 8960, max=22272, per=3.49%, avg=11827.20, stdev=3872.67, samples=10 00:12:27.107 iops : min= 70, max= 174, avg=92.40, stdev=30.26, samples=10 00:12:27.107 write: IOPS=89, BW=11.2MiB/s (11.8MB/s)(60.0MiB/5339msec); 0 zone resets 00:12:27.107 slat (usec): min=13, max=186, avg=33.43, stdev=16.12 00:12:27.107 clat (msec): min=168, max=965, avg=654.75, stdev=100.28 00:12:27.107 lat (msec): min=168, max=965, avg=654.78, stdev=100.28 00:12:27.107 clat percentiles (msec): 00:12:27.107 | 1.00th=[ 275], 5.00th=[ 447], 10.00th=[ 558], 20.00th=[ 625], 00:12:27.107 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 676], 00:12:27.107 | 70.00th=[ 693], 80.00th=[ 701], 90.00th=[ 726], 95.00th=[ 743], 00:12:27.108 | 99.00th=[ 944], 99.50th=[ 953], 99.90th=[ 969], 99.95th=[ 969], 00:12:27.108 | 99.99th=[ 969] 00:12:27.108 bw ( KiB/s): min= 5120, max=12032, per=3.16%, avg=10752.00, stdev=2040.88, samples=10 00:12:27.108 iops : min= 40, max= 94, avg=84.00, stdev=15.94, samples=10 00:12:27.108 lat (msec) : 50=38.73%, 100=4.76%, 
250=5.82%, 500=3.39%, 750=45.08% 00:12:27.108 lat (msec) : 1000=2.22% 00:12:27.108 cpu : usr=0.15%, sys=0.64%, ctx=573, majf=0, minf=1 00:12:27.108 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.3% 00:12:27.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.108 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.108 issued rwts: total=465,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.108 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.108 job5: (groupid=0, jobs=1): err= 0: pid=71774: Thu Jul 25 10:12:59 2024 00:12:27.108 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(61.4MiB/5367msec) 00:12:27.108 slat (nsec): min=8641, max=78595, avg=21304.94, stdev=10382.38 00:12:27.108 clat (msec): min=24, max=385, avg=53.87, stdev=36.53 00:12:27.108 lat (msec): min=24, max=385, avg=53.89, stdev=36.53 00:12:27.108 clat percentiles (msec): 00:12:27.108 | 1.00th=[ 31], 5.00th=[ 34], 10.00th=[ 42], 20.00th=[ 43], 00:12:27.108 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.108 | 70.00th=[ 47], 80.00th=[ 50], 90.00th=[ 66], 95.00th=[ 118], 00:12:27.108 | 99.00th=[ 197], 99.50th=[ 376], 99.90th=[ 384], 99.95th=[ 384], 00:12:27.108 | 99.99th=[ 384] 00:12:27.108 bw ( KiB/s): min= 9216, max=17664, per=3.69%, avg=12489.80, stdev=2928.13, samples=10 00:12:27.108 iops : min= 72, max= 138, avg=97.50, stdev=22.81, samples=10 00:12:27.108 write: IOPS=89, BW=11.1MiB/s (11.7MB/s)(59.8MiB/5367msec); 0 zone resets 00:12:27.108 slat (usec): min=10, max=409, avg=28.30, stdev=25.97 00:12:27.108 clat (msec): min=197, max=1001, avg=662.36, stdev=98.97 00:12:27.108 lat (msec): min=197, max=1001, avg=662.39, stdev=98.97 00:12:27.108 clat percentiles (msec): 00:12:27.108 | 1.00th=[ 309], 5.00th=[ 447], 10.00th=[ 592], 20.00th=[ 642], 00:12:27.108 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 676], 00:12:27.108 | 70.00th=[ 693], 80.00th=[ 701], 90.00th=[ 718], 95.00th=[ 
785], 00:12:27.108 | 99.00th=[ 978], 99.50th=[ 1003], 99.90th=[ 1003], 99.95th=[ 1003], 00:12:27.108 | 99.99th=[ 1003] 00:12:27.108 bw ( KiB/s): min= 4608, max=12032, per=3.14%, avg=10698.40, stdev=2193.62, samples=10 00:12:27.108 iops : min= 36, max= 94, avg=83.50, stdev=17.10, samples=10 00:12:27.108 lat (msec) : 50=42.11%, 100=5.06%, 250=3.51%, 500=3.30%, 750=43.03% 00:12:27.108 lat (msec) : 1000=2.68%, 2000=0.31% 00:12:27.108 cpu : usr=0.13%, sys=0.48%, ctx=594, majf=0, minf=1 00:12:27.108 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.5% 00:12:27.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.108 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.108 issued rwts: total=491,478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.108 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.108 job6: (groupid=0, jobs=1): err= 0: pid=71794: Thu Jul 25 10:12:59 2024 00:12:27.108 read: IOPS=87, BW=10.9MiB/s (11.4MB/s)(58.6MiB/5381msec) 00:12:27.108 slat (usec): min=7, max=414, avg=32.37, stdev=31.04 00:12:27.108 clat (msec): min=2, max=406, avg=54.13, stdev=44.35 00:12:27.108 lat (msec): min=2, max=406, avg=54.17, stdev=44.35 00:12:27.108 clat percentiles (msec): 00:12:27.108 | 1.00th=[ 5], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 43], 00:12:27.108 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.108 | 70.00th=[ 47], 80.00th=[ 50], 90.00th=[ 58], 95.00th=[ 115], 00:12:27.108 | 99.00th=[ 228], 99.50th=[ 384], 99.90th=[ 405], 99.95th=[ 405], 00:12:27.108 | 99.99th=[ 405] 00:12:27.108 bw ( KiB/s): min= 7936, max=16128, per=3.52%, avg=11927.10, stdev=2482.21, samples=10 00:12:27.108 iops : min= 62, max= 126, avg=93.10, stdev=19.38, samples=10 00:12:27.108 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(59.6MiB/5381msec); 0 zone resets 00:12:27.108 slat (usec): min=9, max=772, avg=41.69, stdev=40.44 00:12:27.108 clat (msec): min=41, max=1056, avg=667.80, stdev=109.44 
00:12:27.108 lat (msec): min=41, max=1056, avg=667.84, stdev=109.44 00:12:27.108 clat percentiles (msec): 00:12:27.108 | 1.00th=[ 239], 5.00th=[ 485], 10.00th=[ 609], 20.00th=[ 642], 00:12:27.108 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 676], 60.00th=[ 684], 00:12:27.108 | 70.00th=[ 701], 80.00th=[ 709], 90.00th=[ 735], 95.00th=[ 827], 00:12:27.108 | 99.00th=[ 1003], 99.50th=[ 1028], 99.90th=[ 1053], 99.95th=[ 1053], 00:12:27.108 | 99.99th=[ 1053] 00:12:27.108 bw ( KiB/s): min= 4864, max=12032, per=3.13%, avg=10673.00, stdev=2111.13, samples=10 00:12:27.108 iops : min= 38, max= 94, avg=83.30, stdev=16.49, samples=10 00:12:27.108 lat (msec) : 4=0.21%, 10=0.63%, 20=0.95%, 50=40.17%, 100=4.33% 00:12:27.108 lat (msec) : 250=3.59%, 500=2.75%, 750=43.23%, 1000=3.59%, 2000=0.53% 00:12:27.108 cpu : usr=0.13%, sys=0.59%, ctx=593, majf=0, minf=1 00:12:27.108 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.3% 00:12:27.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.108 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.108 issued rwts: total=469,477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.108 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.108 job7: (groupid=0, jobs=1): err= 0: pid=71868: Thu Jul 25 10:12:59 2024 00:12:27.108 read: IOPS=78, BW=9.87MiB/s (10.4MB/s)(53.0MiB/5368msec) 00:12:27.108 slat (usec): min=8, max=1883, avg=43.32, stdev=140.35 00:12:27.108 clat (msec): min=22, max=403, avg=55.70, stdev=35.48 00:12:27.108 lat (msec): min=22, max=403, avg=55.74, stdev=35.47 00:12:27.108 clat percentiles (msec): 00:12:27.108 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 43], 00:12:27.108 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 47], 00:12:27.108 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 90], 95.00th=[ 131], 00:12:27.108 | 99.00th=[ 167], 99.50th=[ 171], 99.90th=[ 405], 99.95th=[ 405], 00:12:27.108 | 99.99th=[ 405] 00:12:27.108 bw ( 
KiB/s): min= 7424, max=16673, per=3.18%, avg=10778.60, stdev=2949.24, samples=10 00:12:27.108 iops : min= 58, max= 130, avg=84.10, stdev=22.97, samples=10 00:12:27.108 write: IOPS=89, BW=11.1MiB/s (11.7MB/s)(59.8MiB/5368msec); 0 zone resets 00:12:27.108 slat (usec): min=13, max=3337, avg=49.50, stdev=160.10 00:12:27.108 clat (msec): min=196, max=1042, avg=668.30, stdev=101.12 00:12:27.108 lat (msec): min=196, max=1042, avg=668.35, stdev=101.12 00:12:27.108 clat percentiles (msec): 00:12:27.108 | 1.00th=[ 309], 5.00th=[ 460], 10.00th=[ 617], 20.00th=[ 642], 00:12:27.108 | 30.00th=[ 659], 40.00th=[ 667], 50.00th=[ 676], 60.00th=[ 684], 00:12:27.108 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 735], 95.00th=[ 810], 00:12:27.108 | 99.00th=[ 1003], 99.50th=[ 1020], 99.90th=[ 1045], 99.95th=[ 1045], 00:12:27.108 | 99.99th=[ 1045] 00:12:27.108 bw ( KiB/s): min= 4617, max=12032, per=3.13%, avg=10673.70, stdev=2191.65, samples=10 00:12:27.108 iops : min= 36, max= 94, avg=83.30, stdev=17.10, samples=10 00:12:27.108 lat (msec) : 50=38.36%, 100=4.77%, 250=4.10%, 500=3.10%, 750=46.12% 00:12:27.108 lat (msec) : 1000=3.10%, 2000=0.44% 00:12:27.108 cpu : usr=0.24%, sys=0.41%, ctx=635, majf=0, minf=1 00:12:27.108 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:12:27.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.108 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.108 issued rwts: total=424,478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.108 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.108 job8: (groupid=0, jobs=1): err= 0: pid=71910: Thu Jul 25 10:12:59 2024 00:12:27.108 read: IOPS=80, BW=10.1MiB/s (10.5MB/s)(53.8MiB/5346msec) 00:12:27.108 slat (usec): min=7, max=547, avg=26.83, stdev=28.96 00:12:27.108 clat (msec): min=30, max=368, avg=56.82, stdev=38.07 00:12:27.108 lat (msec): min=30, max=368, avg=56.85, stdev=38.07 00:12:27.108 clat percentiles (msec): 
00:12:27.108 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 43], 00:12:27.108 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.108 | 70.00th=[ 47], 80.00th=[ 50], 90.00th=[ 104], 95.00th=[ 138], 00:12:27.108 | 99.00th=[ 167], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 368], 00:12:27.108 | 99.99th=[ 368] 00:12:27.108 bw ( KiB/s): min= 6642, max=17884, per=3.22%, avg=10904.70, stdev=2853.55, samples=10 00:12:27.108 iops : min= 51, max= 139, avg=84.90, stdev=22.25, samples=10 00:12:27.108 write: IOPS=89, BW=11.2MiB/s (11.8MB/s)(60.0MiB/5346msec); 0 zone resets 00:12:27.108 slat (usec): min=10, max=492, avg=40.78, stdev=43.40 00:12:27.108 clat (msec): min=171, max=1005, avg=661.02, stdev=103.58 00:12:27.108 lat (msec): min=171, max=1005, avg=661.06, stdev=103.59 00:12:27.108 clat percentiles (msec): 00:12:27.108 | 1.00th=[ 279], 5.00th=[ 435], 10.00th=[ 567], 20.00th=[ 642], 00:12:27.108 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 684], 00:12:27.108 | 70.00th=[ 693], 80.00th=[ 718], 90.00th=[ 743], 95.00th=[ 768], 00:12:27.108 | 99.00th=[ 953], 99.50th=[ 961], 99.90th=[ 1003], 99.95th=[ 1003], 00:12:27.108 | 99.99th=[ 1003] 00:12:27.108 bw ( KiB/s): min= 5109, max=12032, per=3.16%, avg=10753.30, stdev=2046.51, samples=10 00:12:27.108 iops : min= 39, max= 94, avg=83.70, stdev=16.23, samples=10 00:12:27.108 lat (msec) : 50=38.57%, 100=3.74%, 250=5.05%, 500=3.85%, 750=43.96% 00:12:27.108 lat (msec) : 1000=4.73%, 2000=0.11% 00:12:27.108 cpu : usr=0.36%, sys=0.32%, ctx=595, majf=0, minf=1 00:12:27.108 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:12:27.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.108 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.108 issued rwts: total=430,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.108 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.108 job9: (groupid=0, jobs=1): err= 
0: pid=71923: Thu Jul 25 10:12:59 2024 00:12:27.108 read: IOPS=94, BW=11.8MiB/s (12.3MB/s)(62.9MiB/5347msec) 00:12:27.109 slat (usec): min=7, max=443, avg=31.07, stdev=27.74 00:12:27.109 clat (msec): min=31, max=379, avg=59.17, stdev=40.48 00:12:27.109 lat (msec): min=31, max=379, avg=59.20, stdev=40.47 00:12:27.109 clat percentiles (msec): 00:12:27.109 | 1.00th=[ 33], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 43], 00:12:27.109 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 47], 00:12:27.109 | 70.00th=[ 49], 80.00th=[ 56], 90.00th=[ 108], 95.00th=[ 144], 00:12:27.109 | 99.00th=[ 180], 99.50th=[ 359], 99.90th=[ 380], 99.95th=[ 380], 00:12:27.109 | 99.99th=[ 380] 00:12:27.109 bw ( KiB/s): min= 8175, max=28416, per=3.76%, avg=12739.10, stdev=5761.57, samples=10 00:12:27.109 iops : min= 63, max= 222, avg=99.30, stdev=45.10, samples=10 00:12:27.109 write: IOPS=89, BW=11.2MiB/s (11.7MB/s)(59.8MiB/5347msec); 0 zone resets 00:12:27.109 slat (usec): min=11, max=471, avg=40.96, stdev=34.95 00:12:27.109 clat (msec): min=175, max=1005, avg=652.74, stdev=101.16 00:12:27.109 lat (msec): min=175, max=1005, avg=652.78, stdev=101.16 00:12:27.109 clat percentiles (msec): 00:12:27.109 | 1.00th=[ 284], 5.00th=[ 456], 10.00th=[ 542], 20.00th=[ 617], 00:12:27.109 | 30.00th=[ 642], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 676], 00:12:27.109 | 70.00th=[ 684], 80.00th=[ 701], 90.00th=[ 735], 95.00th=[ 751], 00:12:27.109 | 99.00th=[ 953], 99.50th=[ 986], 99.90th=[ 1003], 99.95th=[ 1003], 00:12:27.109 | 99.99th=[ 1003] 00:12:27.109 bw ( KiB/s): min= 4864, max=12032, per=3.15%, avg=10717.20, stdev=2119.28, samples=10 00:12:27.109 iops : min= 38, max= 94, avg=83.50, stdev=16.51, samples=10 00:12:27.109 lat (msec) : 50=39.35%, 100=6.12%, 250=5.81%, 500=3.26%, 750=43.02% 00:12:27.109 lat (msec) : 1000=2.34%, 2000=0.10% 00:12:27.109 cpu : usr=0.22%, sys=0.62%, ctx=598, majf=0, minf=1 00:12:27.109 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:12:27.109 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.109 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.109 issued rwts: total=503,478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.109 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.109 job10: (groupid=0, jobs=1): err= 0: pid=71924: Thu Jul 25 10:12:59 2024 00:12:27.109 read: IOPS=96, BW=12.1MiB/s (12.7MB/s)(64.5MiB/5340msec) 00:12:27.109 slat (usec): min=7, max=300, avg=30.48, stdev=29.13 00:12:27.109 clat (msec): min=31, max=354, avg=55.78, stdev=37.92 00:12:27.109 lat (msec): min=31, max=354, avg=55.81, stdev=37.92 00:12:27.109 clat percentiles (msec): 00:12:27.109 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 43], 00:12:27.109 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.109 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 77], 95.00th=[ 155], 00:12:27.109 | 99.00th=[ 184], 99.50th=[ 347], 99.90th=[ 355], 99.95th=[ 355], 00:12:27.109 | 99.99th=[ 355] 00:12:27.109 bw ( KiB/s): min= 9216, max=18725, per=3.88%, avg=13136.50, stdev=3154.32, samples=10 00:12:27.109 iops : min= 72, max= 146, avg=102.60, stdev=24.59, samples=10 00:12:27.109 write: IOPS=89, BW=11.2MiB/s (11.8MB/s)(59.9MiB/5340msec); 0 zone resets 00:12:27.109 slat (usec): min=11, max=744, avg=38.62, stdev=43.53 00:12:27.109 clat (msec): min=189, max=1001, avg=652.50, stdev=99.03 00:12:27.109 lat (msec): min=189, max=1002, avg=652.54, stdev=99.03 00:12:27.109 clat percentiles (msec): 00:12:27.109 | 1.00th=[ 296], 5.00th=[ 439], 10.00th=[ 558], 20.00th=[ 625], 00:12:27.109 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 676], 00:12:27.109 | 70.00th=[ 684], 80.00th=[ 701], 90.00th=[ 718], 95.00th=[ 735], 00:12:27.109 | 99.00th=[ 944], 99.50th=[ 995], 99.90th=[ 1003], 99.95th=[ 1003], 00:12:27.109 | 99.99th=[ 1003] 00:12:27.109 bw ( KiB/s): min= 4873, max=12032, per=3.15%, avg=10727.30, stdev=2116.70, samples=10 00:12:27.109 iops : min= 38, max= 
94, avg=83.80, stdev=16.56, samples=10 00:12:27.109 lat (msec) : 50=43.32%, 100=4.02%, 250=4.62%, 500=3.32%, 750=42.61% 00:12:27.109 lat (msec) : 1000=2.01%, 2000=0.10% 00:12:27.109 cpu : usr=0.37%, sys=0.43%, ctx=616, majf=0, minf=1 00:12:27.109 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:12:27.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.109 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.109 issued rwts: total=516,479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.109 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.109 job11: (groupid=0, jobs=1): err= 0: pid=71925: Thu Jul 25 10:12:59 2024 00:12:27.109 read: IOPS=86, BW=10.8MiB/s (11.3MB/s)(58.0MiB/5370msec) 00:12:27.109 slat (usec): min=9, max=184, avg=25.16, stdev=14.12 00:12:27.109 clat (msec): min=24, max=394, avg=56.49, stdev=36.36 00:12:27.109 lat (msec): min=24, max=394, avg=56.51, stdev=36.36 00:12:27.109 clat percentiles (msec): 00:12:27.109 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 43], 00:12:27.109 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 47], 00:12:27.109 | 70.00th=[ 48], 80.00th=[ 51], 90.00th=[ 105], 95.00th=[ 138], 00:12:27.109 | 99.00th=[ 171], 99.50th=[ 186], 99.90th=[ 397], 99.95th=[ 397], 00:12:27.109 | 99.99th=[ 397] 00:12:27.109 bw ( KiB/s): min= 7153, max=20008, per=3.49%, avg=11829.70, stdev=3825.46, samples=10 00:12:27.109 iops : min= 55, max= 156, avg=92.30, stdev=29.93, samples=10 00:12:27.109 write: IOPS=89, BW=11.1MiB/s (11.7MB/s)(59.9MiB/5370msec); 0 zone resets 00:12:27.109 slat (usec): min=13, max=113, avg=33.23, stdev=12.39 00:12:27.109 clat (msec): min=197, max=1022, avg=661.72, stdev=97.92 00:12:27.109 lat (msec): min=197, max=1022, avg=661.76, stdev=97.92 00:12:27.109 clat percentiles (msec): 00:12:27.109 | 1.00th=[ 309], 5.00th=[ 472], 10.00th=[ 584], 20.00th=[ 634], 00:12:27.109 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 667], 
60.00th=[ 676], 00:12:27.109 | 70.00th=[ 684], 80.00th=[ 709], 90.00th=[ 735], 95.00th=[ 760], 00:12:27.109 | 99.00th=[ 986], 99.50th=[ 1011], 99.90th=[ 1020], 99.95th=[ 1020], 00:12:27.109 | 99.99th=[ 1020] 00:12:27.109 bw ( KiB/s): min= 4617, max=12032, per=3.13%, avg=10673.70, stdev=2191.65, samples=10 00:12:27.109 iops : min= 36, max= 94, avg=83.30, stdev=17.10, samples=10 00:12:27.109 lat (msec) : 50=39.77%, 100=4.35%, 250=5.30%, 500=2.86%, 750=44.11% 00:12:27.109 lat (msec) : 1000=3.18%, 2000=0.42% 00:12:27.109 cpu : usr=0.22%, sys=0.43%, ctx=608, majf=0, minf=1 00:12:27.109 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.3% 00:12:27.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.109 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.109 issued rwts: total=464,479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.109 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.109 job12: (groupid=0, jobs=1): err= 0: pid=71926: Thu Jul 25 10:12:59 2024 00:12:27.109 read: IOPS=96, BW=12.1MiB/s (12.7MB/s)(65.0MiB/5385msec) 00:12:27.109 slat (usec): min=8, max=1559, avg=43.53, stdev=106.52 00:12:27.109 clat (msec): min=17, max=398, avg=53.63, stdev=33.56 00:12:27.109 lat (msec): min=17, max=398, avg=53.68, stdev=33.56 00:12:27.109 clat percentiles (msec): 00:12:27.109 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 43], 00:12:27.109 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.109 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 75], 95.00th=[ 114], 00:12:27.109 | 99.00th=[ 207], 99.50th=[ 226], 99.90th=[ 397], 99.95th=[ 397], 00:12:27.109 | 99.99th=[ 397] 00:12:27.109 bw ( KiB/s): min= 9472, max=20480, per=3.92%, avg=13283.80, stdev=3311.00, samples=10 00:12:27.109 iops : min= 74, max= 160, avg=103.70, stdev=25.89, samples=10 00:12:27.109 write: IOPS=88, BW=11.1MiB/s (11.7MB/s)(59.9MiB/5385msec); 0 zone resets 00:12:27.109 slat (usec): min=12, 
max=1174, avg=44.35, stdev=93.38 00:12:27.109 clat (msec): min=202, max=1047, avg=660.18, stdev=97.65 00:12:27.109 lat (msec): min=202, max=1047, avg=660.23, stdev=97.66 00:12:27.109 clat percentiles (msec): 00:12:27.109 | 1.00th=[ 317], 5.00th=[ 477], 10.00th=[ 600], 20.00th=[ 634], 00:12:27.109 | 30.00th=[ 642], 40.00th=[ 651], 50.00th=[ 667], 60.00th=[ 676], 00:12:27.109 | 70.00th=[ 693], 80.00th=[ 701], 90.00th=[ 726], 95.00th=[ 793], 00:12:27.109 | 99.00th=[ 995], 99.50th=[ 1036], 99.90th=[ 1045], 99.95th=[ 1045], 00:12:27.109 | 99.99th=[ 1045] 00:12:27.109 bw ( KiB/s): min= 4352, max=12032, per=3.13%, avg=10647.30, stdev=2270.25, samples=10 00:12:27.109 iops : min= 34, max= 94, avg=83.10, stdev=17.70, samples=10 00:12:27.109 lat (msec) : 20=0.20%, 50=42.64%, 100=5.61%, 250=3.80%, 500=2.90% 00:12:27.109 lat (msec) : 750=42.14%, 1000=2.30%, 2000=0.40% 00:12:27.109 cpu : usr=0.26%, sys=0.46%, ctx=688, majf=0, minf=1 00:12:27.109 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:12:27.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.109 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.109 issued rwts: total=520,479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.109 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.109 job13: (groupid=0, jobs=1): err= 0: pid=71927: Thu Jul 25 10:12:59 2024 00:12:27.109 read: IOPS=92, BW=11.5MiB/s (12.1MB/s)(61.8MiB/5347msec) 00:12:27.109 slat (usec): min=8, max=1774, avg=44.09, stdev=126.52 00:12:27.109 clat (msec): min=30, max=366, avg=56.53, stdev=38.18 00:12:27.109 lat (msec): min=30, max=366, avg=56.57, stdev=38.18 00:12:27.109 clat percentiles (msec): 00:12:27.109 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 43], 00:12:27.109 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 47], 00:12:27.109 | 70.00th=[ 48], 80.00th=[ 51], 90.00th=[ 90], 95.00th=[ 136], 00:12:27.109 | 99.00th=[ 178], 99.50th=[ 368], 
99.90th=[ 368], 99.95th=[ 368], 00:12:27.109 | 99.99th=[ 368] 00:12:27.109 bw ( KiB/s): min= 8192, max=21034, per=3.69%, avg=12514.10, stdev=3436.94, samples=10 00:12:27.109 iops : min= 64, max= 164, avg=97.50, stdev=26.87, samples=10 00:12:27.109 write: IOPS=89, BW=11.2MiB/s (11.7MB/s)(59.9MiB/5347msec); 0 zone resets 00:12:27.109 slat (usec): min=11, max=1268, avg=50.29, stdev=89.35 00:12:27.109 clat (msec): min=179, max=980, avg=655.14, stdev=100.76 00:12:27.109 lat (msec): min=179, max=980, avg=655.19, stdev=100.76 00:12:27.109 clat percentiles (msec): 00:12:27.109 | 1.00th=[ 288], 5.00th=[ 439], 10.00th=[ 558], 20.00th=[ 634], 00:12:27.110 | 30.00th=[ 642], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 676], 00:12:27.110 | 70.00th=[ 693], 80.00th=[ 701], 90.00th=[ 718], 95.00th=[ 760], 00:12:27.110 | 99.00th=[ 961], 99.50th=[ 969], 99.90th=[ 978], 99.95th=[ 978], 00:12:27.110 | 99.99th=[ 978] 00:12:27.110 bw ( KiB/s): min= 5130, max=12032, per=3.15%, avg=10743.90, stdev=2031.00, samples=10 00:12:27.110 iops : min= 40, max= 94, avg=83.70, stdev=15.84, samples=10 00:12:27.110 lat (msec) : 50=40.70%, 100=5.24%, 250=4.83%, 500=3.49%, 750=43.27% 00:12:27.110 lat (msec) : 1000=2.47% 00:12:27.110 cpu : usr=0.21%, sys=0.49%, ctx=708, majf=0, minf=1 00:12:27.110 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5% 00:12:27.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.110 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.110 issued rwts: total=494,479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.110 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.110 job14: (groupid=0, jobs=1): err= 0: pid=71928: Thu Jul 25 10:12:59 2024 00:12:27.110 read: IOPS=85, BW=10.7MiB/s (11.2MB/s)(57.5MiB/5359msec) 00:12:27.110 slat (usec): min=7, max=1140, avg=33.51, stdev=66.13 00:12:27.110 clat (msec): min=25, max=389, avg=56.60, stdev=44.82 00:12:27.110 lat (msec): min=25, max=389, 
avg=56.63, stdev=44.82 00:12:27.110 clat percentiles (msec): 00:12:27.110 | 1.00th=[ 33], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 43], 00:12:27.110 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.110 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 72], 95.00th=[ 138], 00:12:27.110 | 99.00th=[ 368], 99.50th=[ 380], 99.90th=[ 388], 99.95th=[ 388], 00:12:27.110 | 99.99th=[ 388] 00:12:27.110 bw ( KiB/s): min= 9472, max=14336, per=3.44%, avg=11645.40, stdev=1462.66, samples=10 00:12:27.110 iops : min= 74, max= 112, avg=90.90, stdev=11.38, samples=10 00:12:27.110 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(59.4MiB/5359msec); 0 zone resets 00:12:27.110 slat (usec): min=12, max=589, avg=39.29, stdev=38.42 00:12:27.110 clat (msec): min=203, max=1006, avg=666.32, stdev=97.37 00:12:27.110 lat (msec): min=203, max=1006, avg=666.36, stdev=97.37 00:12:27.110 clat percentiles (msec): 00:12:27.110 | 1.00th=[ 317], 5.00th=[ 477], 10.00th=[ 600], 20.00th=[ 642], 00:12:27.110 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 676], 00:12:27.110 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 743], 95.00th=[ 776], 00:12:27.110 | 99.00th=[ 969], 99.50th=[ 986], 99.90th=[ 1003], 99.95th=[ 1003], 00:12:27.110 | 99.99th=[ 1003] 00:12:27.110 bw ( KiB/s): min= 4352, max=12288, per=3.13%, avg=10672.80, stdev=2282.25, samples=10 00:12:27.110 iops : min= 34, max= 96, avg=83.30, stdev=17.79, samples=10 00:12:27.110 lat (msec) : 50=41.50%, 100=4.06%, 250=3.42%, 500=3.21%, 750=43.64% 00:12:27.110 lat (msec) : 1000=4.06%, 2000=0.11% 00:12:27.110 cpu : usr=0.26%, sys=0.47%, ctx=644, majf=0, minf=1 00:12:27.110 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.3% 00:12:27.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.110 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.110 issued rwts: total=460,475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.110 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:12:27.110 job15: (groupid=0, jobs=1): err= 0: pid=71929: Thu Jul 25 10:12:59 2024 00:12:27.110 read: IOPS=84, BW=10.5MiB/s (11.0MB/s)(56.6MiB/5383msec) 00:12:27.110 slat (usec): min=6, max=744, avg=32.72, stdev=59.49 00:12:27.110 clat (msec): min=16, max=404, avg=53.88, stdev=35.88 00:12:27.110 lat (msec): min=16, max=404, avg=53.91, stdev=35.88 00:12:27.110 clat percentiles (msec): 00:12:27.110 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 42], 20.00th=[ 44], 00:12:27.110 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 47], 00:12:27.110 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 73], 95.00th=[ 107], 00:12:27.110 | 99.00th=[ 215], 99.50th=[ 230], 99.90th=[ 405], 99.95th=[ 405], 00:12:27.110 | 99.99th=[ 405] 00:12:27.110 bw ( KiB/s): min= 7936, max=16896, per=3.40%, avg=11517.80, stdev=2672.07, samples=10 00:12:27.110 iops : min= 62, max= 132, avg=89.90, stdev=20.90, samples=10 00:12:27.110 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(59.6MiB/5383msec); 0 zone resets 00:12:27.110 slat (usec): min=9, max=647, avg=39.66, stdev=40.41 00:12:27.110 clat (msec): min=179, max=1021, avg=670.11, stdev=100.48 00:12:27.110 lat (msec): min=179, max=1021, avg=670.15, stdev=100.48 00:12:27.110 clat percentiles (msec): 00:12:27.110 | 1.00th=[ 309], 5.00th=[ 460], 10.00th=[ 600], 20.00th=[ 642], 00:12:27.110 | 30.00th=[ 659], 40.00th=[ 667], 50.00th=[ 676], 60.00th=[ 684], 00:12:27.110 | 70.00th=[ 693], 80.00th=[ 718], 90.00th=[ 743], 95.00th=[ 785], 00:12:27.110 | 99.00th=[ 986], 99.50th=[ 1011], 99.90th=[ 1020], 99.95th=[ 1020], 00:12:27.110 | 99.99th=[ 1020] 00:12:27.110 bw ( KiB/s): min= 4352, max=12032, per=3.13%, avg=10647.50, stdev=2271.38, samples=10 00:12:27.110 iops : min= 34, max= 94, avg=83.10, stdev=17.75, samples=10 00:12:27.110 lat (msec) : 20=0.22%, 50=41.29%, 100=4.52%, 250=2.80%, 500=3.12% 00:12:27.110 lat (msec) : 750=43.55%, 1000=4.09%, 2000=0.43% 00:12:27.110 cpu : usr=0.32%, sys=0.39%, ctx=643, majf=0, minf=1 00:12:27.110 IO depths 
: 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.2% 00:12:27.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.110 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.110 issued rwts: total=453,477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.110 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.110 job16: (groupid=0, jobs=1): err= 0: pid=71930: Thu Jul 25 10:12:59 2024 00:12:27.110 read: IOPS=94, BW=11.8MiB/s (12.4MB/s)(63.1MiB/5344msec) 00:12:27.110 slat (usec): min=7, max=1231, avg=32.15, stdev=69.83 00:12:27.110 clat (msec): min=30, max=378, avg=55.75, stdev=38.39 00:12:27.110 lat (msec): min=30, max=378, avg=55.78, stdev=38.39 00:12:27.110 clat percentiles (msec): 00:12:27.110 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 40], 00:12:27.110 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 45], 00:12:27.110 | 70.00th=[ 47], 80.00th=[ 50], 90.00th=[ 106], 95.00th=[ 138], 00:12:27.110 | 99.00th=[ 180], 99.50th=[ 359], 99.90th=[ 380], 99.95th=[ 380], 00:12:27.110 | 99.99th=[ 380] 00:12:27.110 bw ( KiB/s): min= 9472, max=24112, per=3.79%, avg=12831.90, stdev=4362.20, samples=10 00:12:27.110 iops : min= 74, max= 188, avg=100.00, stdev=33.96, samples=10 00:12:27.110 write: IOPS=89, BW=11.2MiB/s (11.7MB/s)(59.9MiB/5344msec); 0 zone resets 00:12:27.110 slat (usec): min=14, max=461, avg=38.36, stdev=42.95 00:12:27.110 clat (msec): min=170, max=1015, avg=654.29, stdev=101.83 00:12:27.110 lat (msec): min=170, max=1015, avg=654.33, stdev=101.83 00:12:27.110 clat percentiles (msec): 00:12:27.110 | 1.00th=[ 288], 5.00th=[ 426], 10.00th=[ 542], 20.00th=[ 634], 00:12:27.110 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 676], 00:12:27.110 | 70.00th=[ 684], 80.00th=[ 701], 90.00th=[ 726], 95.00th=[ 768], 00:12:27.110 | 99.00th=[ 961], 99.50th=[ 978], 99.90th=[ 1020], 99.95th=[ 1020], 00:12:27.110 | 99.99th=[ 1020] 00:12:27.110 bw ( KiB/s): min= 4873, 
max=12032, per=3.15%, avg=10729.70, stdev=2118.92, samples=10 00:12:27.110 iops : min= 38, max= 94, avg=83.60, stdev=16.53, samples=10 00:12:27.110 lat (msec) : 50=41.06%, 100=4.57%, 250=5.79%, 500=3.35%, 750=42.68% 00:12:27.110 lat (msec) : 1000=2.44%, 2000=0.10% 00:12:27.110 cpu : usr=0.09%, sys=0.56%, ctx=813, majf=0, minf=1 00:12:27.110 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:12:27.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.110 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.110 issued rwts: total=505,479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.110 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.110 job17: (groupid=0, jobs=1): err= 0: pid=71931: Thu Jul 25 10:12:59 2024 00:12:27.110 read: IOPS=82, BW=10.3MiB/s (10.8MB/s)(55.2MiB/5360msec) 00:12:27.110 slat (usec): min=8, max=4095, avg=38.63, stdev=200.19 00:12:27.110 clat (msec): min=32, max=372, avg=56.68, stdev=36.91 00:12:27.110 lat (msec): min=32, max=372, avg=56.72, stdev=36.91 00:12:27.110 clat percentiles (msec): 00:12:27.110 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 43], 00:12:27.110 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 47], 00:12:27.110 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 94], 95.00th=[ 131], 00:12:27.110 | 99.00th=[ 174], 99.50th=[ 363], 99.90th=[ 372], 99.95th=[ 372], 00:12:27.110 | 99.99th=[ 372] 00:12:27.110 bw ( KiB/s): min= 5888, max=21760, per=3.31%, avg=11211.30, stdev=4440.07, samples=10 00:12:27.110 iops : min= 46, max= 170, avg=87.50, stdev=34.77, samples=10 00:12:27.110 write: IOPS=89, BW=11.1MiB/s (11.7MB/s)(59.8MiB/5360msec); 0 zone resets 00:12:27.110 slat (usec): min=13, max=501, avg=40.78, stdev=48.52 00:12:27.110 clat (msec): min=187, max=1031, avg=663.76, stdev=102.77 00:12:27.110 lat (msec): min=187, max=1031, avg=663.80, stdev=102.77 00:12:27.110 clat percentiles (msec): 00:12:27.110 | 1.00th=[ 296], 5.00th=[ 
447], 10.00th=[ 592], 20.00th=[ 634], 00:12:27.110 | 30.00th=[ 651], 40.00th=[ 667], 50.00th=[ 676], 60.00th=[ 684], 00:12:27.110 | 70.00th=[ 684], 80.00th=[ 701], 90.00th=[ 743], 95.00th=[ 802], 00:12:27.110 | 99.00th=[ 1003], 99.50th=[ 1020], 99.90th=[ 1028], 99.95th=[ 1028], 00:12:27.110 | 99.99th=[ 1028] 00:12:27.110 bw ( KiB/s): min= 4608, max=12032, per=3.14%, avg=10698.70, stdev=2198.45, samples=10 00:12:27.110 iops : min= 36, max= 94, avg=83.50, stdev=17.19, samples=10 00:12:27.110 lat (msec) : 50=38.80%, 100=4.67%, 250=4.67%, 500=3.26%, 750=44.35% 00:12:27.110 lat (msec) : 1000=3.80%, 2000=0.43% 00:12:27.110 cpu : usr=0.19%, sys=0.41%, ctx=747, majf=0, minf=1 00:12:27.110 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2% 00:12:27.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.110 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.110 issued rwts: total=442,478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.110 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.110 job18: (groupid=0, jobs=1): err= 0: pid=71932: Thu Jul 25 10:12:59 2024 00:12:27.110 read: IOPS=87, BW=11.0MiB/s (11.5MB/s)(58.9MiB/5356msec) 00:12:27.110 slat (usec): min=7, max=147, avg=27.85, stdev=16.90 00:12:27.110 clat (msec): min=31, max=385, avg=55.48, stdev=38.90 00:12:27.110 lat (msec): min=31, max=385, avg=55.51, stdev=38.90 00:12:27.111 clat percentiles (msec): 00:12:27.111 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 43], 00:12:27.111 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.111 | 70.00th=[ 47], 80.00th=[ 50], 90.00th=[ 77], 95.00th=[ 116], 00:12:27.111 | 99.00th=[ 178], 99.50th=[ 363], 99.90th=[ 384], 99.95th=[ 384], 00:12:27.111 | 99.99th=[ 384] 00:12:27.111 bw ( KiB/s): min= 9472, max=17664, per=3.53%, avg=11955.20, stdev=2315.19, samples=10 00:12:27.111 iops : min= 74, max= 138, avg=93.40, stdev=18.09, samples=10 00:12:27.111 write: 
IOPS=88, BW=11.1MiB/s (11.6MB/s)(59.5MiB/5356msec); 0 zone resets 00:12:27.111 slat (usec): min=11, max=491, avg=33.99, stdev=26.48 00:12:27.111 clat (msec): min=192, max=1034, avg=664.18, stdev=97.50 00:12:27.111 lat (msec): min=192, max=1034, avg=664.22, stdev=97.50 00:12:27.111 clat percentiles (msec): 00:12:27.111 | 1.00th=[ 305], 5.00th=[ 481], 10.00th=[ 600], 20.00th=[ 642], 00:12:27.111 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 676], 00:12:27.111 | 70.00th=[ 684], 80.00th=[ 701], 90.00th=[ 743], 95.00th=[ 768], 00:12:27.111 | 99.00th=[ 978], 99.50th=[ 1011], 99.90th=[ 1036], 99.95th=[ 1036], 00:12:27.111 | 99.99th=[ 1036] 00:12:27.111 bw ( KiB/s): min= 4608, max=12032, per=3.13%, avg=10675.20, stdev=2185.77, samples=10 00:12:27.111 iops : min= 36, max= 94, avg=83.40, stdev=17.08, samples=10 00:12:27.111 lat (msec) : 50=40.76%, 100=5.07%, 250=3.91%, 500=2.96%, 750=43.82% 00:12:27.111 lat (msec) : 1000=3.06%, 2000=0.42% 00:12:27.111 cpu : usr=0.19%, sys=0.49%, ctx=619, majf=0, minf=1 00:12:27.111 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.3% 00:12:27.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.111 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.111 issued rwts: total=471,476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.111 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.111 job19: (groupid=0, jobs=1): err= 0: pid=71933: Thu Jul 25 10:12:59 2024 00:12:27.111 read: IOPS=98, BW=12.4MiB/s (13.0MB/s)(66.1MiB/5350msec) 00:12:27.111 slat (usec): min=7, max=202, avg=27.85, stdev=17.12 00:12:27.111 clat (msec): min=31, max=372, avg=56.06, stdev=34.62 00:12:27.111 lat (msec): min=31, max=372, avg=56.08, stdev=34.62 00:12:27.111 clat percentiles (msec): 00:12:27.111 | 1.00th=[ 33], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 43], 00:12:27.111 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.111 | 70.00th=[ 48], 
80.00th=[ 50], 90.00th=[ 95], 95.00th=[ 138], 00:12:27.111 | 99.00th=[ 176], 99.50th=[ 203], 99.90th=[ 372], 99.95th=[ 372], 00:12:27.111 | 99.99th=[ 372] 00:12:27.111 bw ( KiB/s): min= 8960, max=21760, per=3.98%, avg=13481.10, stdev=3517.10, samples=10 00:12:27.111 iops : min= 70, max= 170, avg=105.10, stdev=27.54, samples=10 00:12:27.111 write: IOPS=89, BW=11.2MiB/s (11.8MB/s)(60.0MiB/5350msec); 0 zone resets 00:12:27.111 slat (usec): min=10, max=239, avg=35.36, stdev=18.64 00:12:27.111 clat (msec): min=185, max=1020, avg=650.70, stdev=97.09 00:12:27.111 lat (msec): min=185, max=1020, avg=650.73, stdev=97.09 00:12:27.111 clat percentiles (msec): 00:12:27.111 | 1.00th=[ 296], 5.00th=[ 451], 10.00th=[ 575], 20.00th=[ 625], 00:12:27.111 | 30.00th=[ 634], 40.00th=[ 651], 50.00th=[ 659], 60.00th=[ 676], 00:12:27.111 | 70.00th=[ 684], 80.00th=[ 693], 90.00th=[ 718], 95.00th=[ 743], 00:12:27.111 | 99.00th=[ 969], 99.50th=[ 995], 99.90th=[ 1020], 99.95th=[ 1020], 00:12:27.111 | 99.99th=[ 1020] 00:12:27.111 bw ( KiB/s): min= 4864, max=12288, per=3.15%, avg=10717.30, stdev=2123.41, samples=10 00:12:27.111 iops : min= 38, max= 96, avg=83.50, stdev=16.55, samples=10 00:12:27.111 lat (msec) : 50=42.81%, 100=4.56%, 250=5.25%, 500=2.97%, 750=42.22% 00:12:27.111 lat (msec) : 1000=1.98%, 2000=0.20% 00:12:27.111 cpu : usr=0.21%, sys=0.60%, ctx=606, majf=0, minf=1 00:12:27.111 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:12:27.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.111 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.111 issued rwts: total=529,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.111 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.111 job20: (groupid=0, jobs=1): err= 0: pid=71934: Thu Jul 25 10:12:59 2024 00:12:27.111 read: IOPS=86, BW=10.8MiB/s (11.3MB/s)(57.9MiB/5348msec) 00:12:27.111 slat (usec): min=7, max=108, avg=26.53, stdev=13.83 
00:12:27.111 clat (msec): min=29, max=350, avg=52.28, stdev=30.03 00:12:27.111 lat (msec): min=29, max=350, avg=52.31, stdev=30.02 00:12:27.111 clat percentiles (msec): 00:12:27.111 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 42], 00:12:27.111 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:12:27.111 | 70.00th=[ 46], 80.00th=[ 48], 90.00th=[ 77], 95.00th=[ 122], 00:12:27.111 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 351], 99.95th=[ 351], 00:12:27.111 | 99.99th=[ 351] 00:12:27.111 bw ( KiB/s): min= 8192, max=15584, per=3.49%, avg=11826.60, stdev=2530.57, samples=10 00:12:27.111 iops : min= 64, max= 121, avg=92.10, stdev=19.81, samples=10 00:12:27.111 write: IOPS=90, BW=11.3MiB/s (11.8MB/s)(60.2MiB/5348msec); 0 zone resets 00:12:27.111 slat (usec): min=10, max=4045, avg=40.38, stdev=183.36 00:12:27.111 clat (msec): min=182, max=1028, avg=658.42, stdev=103.16 00:12:27.111 lat (msec): min=182, max=1028, avg=658.47, stdev=103.13 00:12:27.111 clat percentiles (msec): 00:12:27.111 | 1.00th=[ 292], 5.00th=[ 422], 10.00th=[ 567], 20.00th=[ 634], 00:12:27.111 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 676], 00:12:27.111 | 70.00th=[ 684], 80.00th=[ 701], 90.00th=[ 735], 95.00th=[ 760], 00:12:27.111 | 99.00th=[ 978], 99.50th=[ 1011], 99.90th=[ 1028], 99.95th=[ 1028], 00:12:27.111 | 99.99th=[ 1028] 00:12:27.111 bw ( KiB/s): min= 4854, max=12032, per=3.15%, avg=10727.80, stdev=2124.75, samples=10 00:12:27.111 iops : min= 37, max= 94, avg=83.50, stdev=16.84, samples=10 00:12:27.111 lat (msec) : 50=41.38%, 100=3.28%, 250=4.55%, 500=3.70%, 750=44.13% 00:12:27.111 lat (msec) : 1000=2.65%, 2000=0.32% 00:12:27.111 cpu : usr=0.24%, sys=0.54%, ctx=630, majf=0, minf=1 00:12:27.111 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.3% 00:12:27.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.111 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.111 issued 
rwts: total=463,482,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.111 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.111 job21: (groupid=0, jobs=1): err= 0: pid=71935: Thu Jul 25 10:12:59 2024 00:12:27.111 read: IOPS=80, BW=10.0MiB/s (10.5MB/s)(54.0MiB/5383msec) 00:12:27.111 slat (usec): min=9, max=671, avg=39.88, stdev=74.65 00:12:27.111 clat (msec): min=9, max=402, avg=52.77, stdev=41.32 00:12:27.111 lat (msec): min=9, max=402, avg=52.80, stdev=41.32 00:12:27.111 clat percentiles (msec): 00:12:27.111 | 1.00th=[ 14], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 43], 00:12:27.111 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.111 | 70.00th=[ 47], 80.00th=[ 48], 90.00th=[ 54], 95.00th=[ 102], 00:12:27.111 | 99.00th=[ 228], 99.50th=[ 384], 99.90th=[ 401], 99.95th=[ 401], 00:12:27.111 | 99.99th=[ 401] 00:12:27.111 bw ( KiB/s): min= 7680, max=14080, per=3.24%, avg=10958.90, stdev=2360.68, samples=10 00:12:27.111 iops : min= 60, max= 110, avg=85.60, stdev=18.45, samples=10 00:12:27.111 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(59.6MiB/5383msec); 0 zone resets 00:12:27.111 slat (usec): min=14, max=561, avg=43.15, stdev=67.83 00:12:27.111 clat (msec): min=90, max=1051, avg=673.41, stdev=105.93 00:12:27.111 lat (msec): min=90, max=1051, avg=673.45, stdev=105.93 00:12:27.111 clat percentiles (msec): 00:12:27.111 | 1.00th=[ 251], 5.00th=[ 481], 10.00th=[ 592], 20.00th=[ 642], 00:12:27.111 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 684], 00:12:27.111 | 70.00th=[ 701], 80.00th=[ 735], 90.00th=[ 760], 95.00th=[ 827], 00:12:27.111 | 99.00th=[ 995], 99.50th=[ 1045], 99.90th=[ 1053], 99.95th=[ 1053], 00:12:27.111 | 99.99th=[ 1053] 00:12:27.111 bw ( KiB/s): min= 4608, max=12032, per=3.13%, avg=10677.30, stdev=2195.83, samples=10 00:12:27.111 iops : min= 36, max= 94, avg=83.40, stdev=17.15, samples=10 00:12:27.112 lat (msec) : 10=0.22%, 20=0.66%, 50=40.70%, 100=3.63%, 250=2.42% 00:12:27.112 lat (msec) : 500=2.97%, 750=42.46%, 
1000=6.49%, 2000=0.44% 00:12:27.112 cpu : usr=0.13%, sys=0.45%, ctx=718, majf=0, minf=1 00:12:27.112 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:12:27.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.112 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.112 issued rwts: total=432,477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.112 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.112 job22: (groupid=0, jobs=1): err= 0: pid=71936: Thu Jul 25 10:12:59 2024 00:12:27.112 read: IOPS=103, BW=12.9MiB/s (13.5MB/s)(69.6MiB/5390msec) 00:12:27.112 slat (usec): min=9, max=340, avg=34.67, stdev=32.78 00:12:27.112 clat (msec): min=10, max=391, avg=50.79, stdev=34.56 00:12:27.112 lat (msec): min=10, max=391, avg=50.83, stdev=34.56 00:12:27.112 clat percentiles (msec): 00:12:27.112 | 1.00th=[ 17], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 43], 00:12:27.112 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.112 | 70.00th=[ 47], 80.00th=[ 48], 90.00th=[ 53], 95.00th=[ 77], 00:12:27.112 | 99.00th=[ 218], 99.50th=[ 226], 99.90th=[ 393], 99.95th=[ 393], 00:12:27.112 | 99.99th=[ 393] 00:12:27.112 bw ( KiB/s): min= 8960, max=17920, per=4.20%, avg=14208.00, stdev=2749.93, samples=10 00:12:27.112 iops : min= 70, max= 140, avg=111.00, stdev=21.48, samples=10 00:12:27.112 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(59.9MiB/5390msec); 0 zone resets 00:12:27.112 slat (usec): min=13, max=308, avg=43.94, stdev=28.54 00:12:27.112 clat (msec): min=88, max=997, avg=660.06, stdev=101.01 00:12:27.112 lat (msec): min=88, max=997, avg=660.10, stdev=101.01 00:12:27.112 clat percentiles (msec): 00:12:27.112 | 1.00th=[ 257], 5.00th=[ 460], 10.00th=[ 592], 20.00th=[ 634], 00:12:27.112 | 30.00th=[ 651], 40.00th=[ 651], 50.00th=[ 659], 60.00th=[ 667], 00:12:27.112 | 70.00th=[ 684], 80.00th=[ 701], 90.00th=[ 751], 95.00th=[ 802], 00:12:27.112 | 99.00th=[ 978], 99.50th=[ 986], 
99.90th=[ 995], 99.95th=[ 995], 00:12:27.112 | 99.99th=[ 995] 00:12:27.112 bw ( KiB/s): min= 4352, max=12032, per=3.13%, avg=10675.20, stdev=2270.73, samples=10 00:12:27.112 iops : min= 34, max= 94, avg=83.40, stdev=17.74, samples=10 00:12:27.112 lat (msec) : 20=0.77%, 50=46.33%, 100=4.25%, 250=2.61%, 500=2.80% 00:12:27.112 lat (msec) : 750=38.61%, 1000=4.63% 00:12:27.112 cpu : usr=0.32%, sys=0.56%, ctx=618, majf=0, minf=1 00:12:27.112 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:12:27.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.112 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.112 issued rwts: total=557,479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.112 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.112 job23: (groupid=0, jobs=1): err= 0: pid=71937: Thu Jul 25 10:12:59 2024 00:12:27.112 read: IOPS=84, BW=10.6MiB/s (11.1MB/s)(56.9MiB/5385msec) 00:12:27.112 slat (usec): min=6, max=452, avg=27.38, stdev=32.25 00:12:27.112 clat (msec): min=19, max=397, avg=51.83, stdev=32.37 00:12:27.112 lat (msec): min=19, max=397, avg=51.85, stdev=32.37 00:12:27.112 clat percentiles (msec): 00:12:27.112 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 40], 00:12:27.112 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:12:27.112 | 70.00th=[ 46], 80.00th=[ 48], 90.00th=[ 77], 95.00th=[ 109], 00:12:27.112 | 99.00th=[ 201], 99.50th=[ 215], 99.90th=[ 397], 99.95th=[ 397], 00:12:27.112 | 99.99th=[ 397] 00:12:27.112 bw ( KiB/s): min= 8192, max=18944, per=3.43%, avg=11620.50, stdev=3278.18, samples=10 00:12:27.112 iops : min= 64, max= 148, avg=90.70, stdev=25.68, samples=10 00:12:27.112 write: IOPS=88, BW=11.1MiB/s (11.7MB/s)(59.9MiB/5385msec); 0 zone resets 00:12:27.112 slat (usec): min=13, max=527, avg=42.00, stdev=47.58 00:12:27.112 clat (msec): min=197, max=1058, avg=669.32, stdev=99.72 00:12:27.112 lat (msec): min=197, max=1058, 
avg=669.36, stdev=99.72 00:12:27.112 clat percentiles (msec): 00:12:27.112 | 1.00th=[ 313], 5.00th=[ 464], 10.00th=[ 600], 20.00th=[ 642], 00:12:27.112 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 676], 60.00th=[ 684], 00:12:27.112 | 70.00th=[ 693], 80.00th=[ 718], 90.00th=[ 751], 95.00th=[ 793], 00:12:27.112 | 99.00th=[ 995], 99.50th=[ 1028], 99.90th=[ 1062], 99.95th=[ 1062], 00:12:27.112 | 99.99th=[ 1062] 00:12:27.112 bw ( KiB/s): min= 4096, max=12032, per=3.13%, avg=10647.30, stdev=2352.44, samples=10 00:12:27.112 iops : min= 32, max= 94, avg=83.10, stdev=18.36, samples=10 00:12:27.112 lat (msec) : 20=0.21%, 50=40.69%, 100=4.82%, 250=3.32%, 500=2.89% 00:12:27.112 lat (msec) : 750=43.58%, 1000=4.07%, 2000=0.43% 00:12:27.112 cpu : usr=0.22%, sys=0.45%, ctx=716, majf=0, minf=1 00:12:27.112 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.3% 00:12:27.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.112 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.112 issued rwts: total=455,479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.112 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.112 job24: (groupid=0, jobs=1): err= 0: pid=71938: Thu Jul 25 10:12:59 2024 00:12:27.112 read: IOPS=92, BW=11.6MiB/s (12.1MB/s)(62.1MiB/5378msec) 00:12:27.112 slat (usec): min=8, max=7737, avg=54.84, stdev=349.72 00:12:27.112 clat (msec): min=26, max=395, avg=52.37, stdev=34.47 00:12:27.112 lat (msec): min=26, max=395, avg=52.43, stdev=34.47 00:12:27.112 clat percentiles (msec): 00:12:27.112 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 42], 00:12:27.112 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:12:27.112 | 70.00th=[ 46], 80.00th=[ 48], 90.00th=[ 70], 95.00th=[ 112], 00:12:27.112 | 99.00th=[ 205], 99.50th=[ 222], 99.90th=[ 397], 99.95th=[ 397], 00:12:27.112 | 99.99th=[ 397] 00:12:27.112 bw ( KiB/s): min= 9984, max=16640, per=3.74%, avg=12669.80, 
stdev=2018.68, samples=10 00:12:27.112 iops : min= 78, max= 130, avg=98.90, stdev=15.85, samples=10 00:12:27.112 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(59.6MiB/5378msec); 0 zone resets 00:12:27.112 slat (usec): min=14, max=1735, avg=53.25, stdev=95.42 00:12:27.112 clat (msec): min=202, max=1034, avg=664.57, stdev=98.68 00:12:27.112 lat (msec): min=202, max=1034, avg=664.62, stdev=98.67 00:12:27.112 clat percentiles (msec): 00:12:27.112 | 1.00th=[ 317], 5.00th=[ 481], 10.00th=[ 600], 20.00th=[ 634], 00:12:27.112 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 676], 00:12:27.112 | 70.00th=[ 684], 80.00th=[ 701], 90.00th=[ 726], 95.00th=[ 785], 00:12:27.112 | 99.00th=[ 1011], 99.50th=[ 1036], 99.90th=[ 1036], 99.95th=[ 1036], 00:12:27.112 | 99.99th=[ 1036] 00:12:27.112 bw ( KiB/s): min= 4096, max=12032, per=3.13%, avg=10647.20, stdev=2351.83, samples=10 00:12:27.112 iops : min= 32, max= 94, avg=83.10, stdev=18.33, samples=10 00:12:27.112 lat (msec) : 50=42.51%, 100=5.34%, 250=3.39%, 500=2.67%, 750=43.12% 00:12:27.112 lat (msec) : 1000=2.46%, 2000=0.51% 00:12:27.112 cpu : usr=0.22%, sys=0.50%, ctx=819, majf=0, minf=1 00:12:27.112 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5% 00:12:27.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.112 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.112 issued rwts: total=497,477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.112 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.112 job25: (groupid=0, jobs=1): err= 0: pid=71939: Thu Jul 25 10:12:59 2024 00:12:27.112 read: IOPS=91, BW=11.5MiB/s (12.0MB/s)(61.5MiB/5363msec) 00:12:27.112 slat (usec): min=7, max=3949, avg=35.39, stdev=182.24 00:12:27.112 clat (msec): min=17, max=387, avg=54.91, stdev=43.38 00:12:27.112 lat (msec): min=17, max=387, avg=54.95, stdev=43.41 00:12:27.112 clat percentiles (msec): 00:12:27.112 | 1.00th=[ 29], 5.00th=[ 34], 10.00th=[ 38], 
20.00th=[ 42], 00:12:27.112 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.112 | 70.00th=[ 47], 80.00th=[ 48], 90.00th=[ 70], 95.00th=[ 129], 00:12:27.112 | 99.00th=[ 368], 99.50th=[ 376], 99.90th=[ 388], 99.95th=[ 388], 00:12:27.112 | 99.99th=[ 388] 00:12:27.112 bw ( KiB/s): min= 9984, max=17152, per=3.68%, avg=12467.20, stdev=2273.94, samples=10 00:12:27.112 iops : min= 78, max= 134, avg=97.40, stdev=17.77, samples=10 00:12:27.112 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(59.5MiB/5363msec); 0 zone resets 00:12:27.112 slat (usec): min=11, max=387, avg=35.29, stdev=27.15 00:12:27.112 clat (msec): min=192, max=1003, avg=663.36, stdev=98.48 00:12:27.112 lat (msec): min=192, max=1003, avg=663.40, stdev=98.48 00:12:27.112 clat percentiles (msec): 00:12:27.112 | 1.00th=[ 309], 5.00th=[ 472], 10.00th=[ 592], 20.00th=[ 642], 00:12:27.112 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 684], 00:12:27.112 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 726], 95.00th=[ 785], 00:12:27.112 | 99.00th=[ 978], 99.50th=[ 1003], 99.90th=[ 1003], 99.95th=[ 1003], 00:12:27.112 | 99.99th=[ 1003] 00:12:27.112 bw ( KiB/s): min= 4608, max=12032, per=3.13%, avg=10675.20, stdev=2199.05, samples=10 00:12:27.112 iops : min= 36, max= 94, avg=83.40, stdev=17.18, samples=10 00:12:27.112 lat (msec) : 20=0.21%, 50=42.67%, 100=4.55%, 250=3.31%, 500=3.31% 00:12:27.112 lat (msec) : 750=43.39%, 1000=2.27%, 2000=0.31% 00:12:27.112 cpu : usr=0.28%, sys=0.43%, ctx=631, majf=0, minf=1 00:12:27.112 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.5% 00:12:27.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.112 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.112 issued rwts: total=492,476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.112 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.112 job26: (groupid=0, jobs=1): err= 0: pid=71940: Thu Jul 25 10:12:59 2024 
00:12:27.112 read: IOPS=81, BW=10.2MiB/s (10.7MB/s)(54.8MiB/5361msec) 00:12:27.112 slat (usec): min=7, max=673, avg=35.07, stdev=53.83 00:12:27.112 clat (msec): min=29, max=378, avg=53.95, stdev=33.66 00:12:27.112 lat (msec): min=29, max=378, avg=53.98, stdev=33.65 00:12:27.112 clat percentiles (msec): 00:12:27.112 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 42], 00:12:27.112 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:12:27.112 | 70.00th=[ 47], 80.00th=[ 50], 90.00th=[ 83], 95.00th=[ 133], 00:12:27.112 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 380], 99.95th=[ 380], 00:12:27.112 | 99.99th=[ 380] 00:12:27.113 bw ( KiB/s): min= 8431, max=18688, per=3.30%, avg=11185.50, stdev=2888.16, samples=10 00:12:27.113 iops : min= 65, max= 146, avg=87.30, stdev=22.66, samples=10 00:12:27.113 write: IOPS=89, BW=11.2MiB/s (11.7MB/s)(60.0MiB/5361msec); 0 zone resets 00:12:27.113 slat (usec): min=13, max=5929, avg=57.23, stdev=273.66 00:12:27.113 clat (msec): min=182, max=1005, avg=663.79, stdev=101.94 00:12:27.113 lat (msec): min=188, max=1005, avg=663.85, stdev=101.88 00:12:27.113 clat percentiles (msec): 00:12:27.113 | 1.00th=[ 300], 5.00th=[ 443], 10.00th=[ 584], 20.00th=[ 642], 00:12:27.113 | 30.00th=[ 651], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 684], 00:12:27.113 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 735], 95.00th=[ 768], 00:12:27.113 | 99.00th=[ 986], 99.50th=[ 995], 99.90th=[ 1003], 99.95th=[ 1003], 00:12:27.113 | 99.99th=[ 1003] 00:12:27.113 bw ( KiB/s): min= 4608, max=12032, per=3.14%, avg=10698.70, stdev=2205.34, samples=10 00:12:27.113 iops : min= 36, max= 94, avg=83.50, stdev=17.25, samples=10 00:12:27.113 lat (msec) : 50=38.78%, 100=5.01%, 250=4.14%, 500=3.49%, 750=44.44% 00:12:27.113 lat (msec) : 1000=4.03%, 2000=0.11% 00:12:27.113 cpu : usr=0.21%, sys=0.45%, ctx=766, majf=0, minf=1 00:12:27.113 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.1% 00:12:27.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.113 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.113 issued rwts: total=438,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.113 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.113 job27: (groupid=0, jobs=1): err= 0: pid=71941: Thu Jul 25 10:12:59 2024 00:12:27.113 read: IOPS=83, BW=10.5MiB/s (11.0MB/s)(56.2MiB/5380msec) 00:12:27.113 slat (usec): min=9, max=414, avg=28.29, stdev=33.36 00:12:27.113 clat (msec): min=19, max=402, avg=55.27, stdev=37.75 00:12:27.113 lat (msec): min=19, max=402, avg=55.30, stdev=37.75 00:12:27.113 clat percentiles (msec): 00:12:27.113 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 41], 00:12:27.113 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:12:27.113 | 70.00th=[ 47], 80.00th=[ 50], 90.00th=[ 102], 95.00th=[ 131], 00:12:27.113 | 99.00th=[ 199], 99.50th=[ 224], 99.90th=[ 405], 99.95th=[ 405], 00:12:27.113 | 99.99th=[ 405] 00:12:27.113 bw ( KiB/s): min= 8192, max=20736, per=3.38%, avg=11441.20, stdev=3746.08, samples=10 00:12:27.113 iops : min= 64, max= 162, avg=89.30, stdev=29.31, samples=10 00:12:27.113 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(59.6MiB/5380msec); 0 zone resets 00:12:27.113 slat (usec): min=13, max=251, avg=32.91, stdev=22.11 00:12:27.113 clat (msec): min=197, max=1052, avg=668.74, stdev=98.03 00:12:27.113 lat (msec): min=197, max=1052, avg=668.77, stdev=98.03 00:12:27.113 clat percentiles (msec): 00:12:27.113 | 1.00th=[ 313], 5.00th=[ 481], 10.00th=[ 617], 20.00th=[ 651], 00:12:27.113 | 30.00th=[ 659], 40.00th=[ 667], 50.00th=[ 676], 60.00th=[ 684], 00:12:27.113 | 70.00th=[ 693], 80.00th=[ 701], 90.00th=[ 726], 95.00th=[ 760], 00:12:27.113 | 99.00th=[ 1011], 99.50th=[ 1028], 99.90th=[ 1053], 99.95th=[ 1053], 00:12:27.113 | 99.99th=[ 1053] 00:12:27.113 bw ( KiB/s): min= 4096, max=12032, per=3.13%, avg=10647.30, stdev=2352.44, samples=10 00:12:27.113 iops : min= 32, max= 94, avg=83.10, stdev=18.36, 
samples=10 00:12:27.113 lat (msec) : 20=0.22%, 50=38.94%, 100=4.21%, 250=5.39%, 500=2.80% 00:12:27.113 lat (msec) : 750=45.85%, 1000=2.05%, 2000=0.54% 00:12:27.113 cpu : usr=0.17%, sys=0.48%, ctx=721, majf=0, minf=1 00:12:27.113 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2% 00:12:27.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.113 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.113 issued rwts: total=450,477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.113 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.113 job28: (groupid=0, jobs=1): err= 0: pid=71942: Thu Jul 25 10:12:59 2024 00:12:27.113 read: IOPS=101, BW=12.7MiB/s (13.3MB/s)(68.1MiB/5383msec) 00:12:27.113 slat (usec): min=7, max=259, avg=25.12, stdev=19.61 00:12:27.113 clat (msec): min=19, max=405, avg=53.67, stdev=33.75 00:12:27.113 lat (msec): min=19, max=405, avg=53.70, stdev=33.74 00:12:27.113 clat percentiles (msec): 00:12:27.113 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 38], 20.00th=[ 42], 00:12:27.113 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:12:27.113 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 79], 95.00th=[ 115], 00:12:27.113 | 99.00th=[ 199], 99.50th=[ 234], 99.90th=[ 405], 99.95th=[ 405], 00:12:27.113 | 99.99th=[ 405] 00:12:27.113 bw ( KiB/s): min=10240, max=22528, per=4.10%, avg=13898.60, stdev=3389.84, samples=10 00:12:27.113 iops : min= 80, max= 176, avg=108.50, stdev=26.56, samples=10 00:12:27.113 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(59.8MiB/5383msec); 0 zone resets 00:12:27.113 slat (usec): min=12, max=281, avg=32.52, stdev=25.09 00:12:27.113 clat (msec): min=197, max=1023, avg=658.59, stdev=101.18 00:12:27.113 lat (msec): min=197, max=1023, avg=658.62, stdev=101.18 00:12:27.113 clat percentiles (msec): 00:12:27.113 | 1.00th=[ 313], 5.00th=[ 456], 10.00th=[ 592], 20.00th=[ 634], 00:12:27.113 | 30.00th=[ 642], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 
676], 00:12:27.113 | 70.00th=[ 684], 80.00th=[ 701], 90.00th=[ 726], 95.00th=[ 802], 00:12:27.113 | 99.00th=[ 1003], 99.50th=[ 1011], 99.90th=[ 1020], 99.95th=[ 1020], 00:12:27.113 | 99.99th=[ 1020] 00:12:27.113 bw ( KiB/s): min= 4096, max=12032, per=3.13%, avg=10647.30, stdev=2352.44, samples=10 00:12:27.113 iops : min= 32, max= 94, avg=83.10, stdev=18.36, samples=10 00:12:27.113 lat (msec) : 20=0.10%, 50=44.18%, 100=5.28%, 250=3.91%, 500=3.03% 00:12:27.113 lat (msec) : 750=40.86%, 1000=2.15%, 2000=0.49% 00:12:27.113 cpu : usr=0.17%, sys=0.48%, ctx=707, majf=0, minf=1 00:12:27.113 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:12:27.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.113 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.113 issued rwts: total=545,478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.113 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.113 job29: (groupid=0, jobs=1): err= 0: pid=71943: Thu Jul 25 10:12:59 2024 00:12:27.113 read: IOPS=94, BW=11.9MiB/s (12.4MB/s)(63.4MiB/5340msec) 00:12:27.113 slat (usec): min=8, max=136, avg=23.18, stdev=13.56 00:12:27.113 clat (msec): min=30, max=366, avg=57.62, stdev=42.33 00:12:27.113 lat (msec): min=30, max=366, avg=57.64, stdev=42.33 00:12:27.113 clat percentiles (msec): 00:12:27.113 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 40], 00:12:27.113 | 30.00th=[ 42], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:12:27.113 | 70.00th=[ 47], 80.00th=[ 54], 90.00th=[ 112], 95.00th=[ 150], 00:12:27.113 | 99.00th=[ 186], 99.50th=[ 347], 99.90th=[ 368], 99.95th=[ 368], 00:12:27.113 | 99.99th=[ 368] 00:12:27.113 bw ( KiB/s): min= 8448, max=25600, per=3.80%, avg=12876.80, stdev=4769.59, samples=10 00:12:27.113 iops : min= 66, max= 200, avg=100.60, stdev=37.26, samples=10 00:12:27.113 write: IOPS=89, BW=11.2MiB/s (11.7MB/s)(59.6MiB/5340msec); 0 zone resets 00:12:27.113 slat (usec): min=12, 
max=189, avg=33.31, stdev=22.26 00:12:27.113 clat (msec): min=182, max=1025, avg=654.28, stdev=100.49 00:12:27.113 lat (msec): min=182, max=1025, avg=654.31, stdev=100.49 00:12:27.113 clat percentiles (msec): 00:12:27.113 | 1.00th=[ 296], 5.00th=[ 456], 10.00th=[ 550], 20.00th=[ 625], 00:12:27.113 | 30.00th=[ 642], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 676], 00:12:27.113 | 70.00th=[ 684], 80.00th=[ 693], 90.00th=[ 726], 95.00th=[ 776], 00:12:27.113 | 99.00th=[ 995], 99.50th=[ 1003], 99.90th=[ 1028], 99.95th=[ 1028], 00:12:27.113 | 99.99th=[ 1028] 00:12:27.113 bw ( KiB/s): min= 4608, max=12032, per=3.14%, avg=10700.80, stdev=2194.91, samples=10 00:12:27.113 iops : min= 36, max= 94, avg=83.60, stdev=17.15, samples=10 00:12:27.113 lat (msec) : 50=40.55%, 100=4.88%, 250=6.00%, 500=3.46%, 750=41.57% 00:12:27.113 lat (msec) : 1000=3.25%, 2000=0.30% 00:12:27.113 cpu : usr=0.15%, sys=0.47%, ctx=848, majf=0, minf=1 00:12:27.113 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:12:27.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.113 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.113 issued rwts: total=507,477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.113 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.113 00:12:27.113 Run status group 0 (all jobs): 00:12:27.113 READ: bw=331MiB/s (347MB/s), 9989KiB/s-12.9MiB/s (10.2MB/s-13.5MB/s), io=1784MiB (1871MB), run=5339-5395msec 00:12:27.113 WRITE: bw=333MiB/s (349MB/s), 11.1MiB/s-11.3MiB/s (11.6MB/s-11.8MB/s), io=1795MiB (1882MB), run=5339-5395msec 00:12:27.113 00:12:27.113 Disk stats (read/write): 00:12:27.113 sda: ios=493/461, merge=0/0, ticks=23296/300142, in_queue=323438, util=88.74% 00:12:27.113 sdb: ios=469/463, merge=0/0, ticks=21878/304915, in_queue=326793, util=90.47% 00:12:27.113 sdc: ios=535/460, merge=0/0, ticks=25578/297840, in_queue=323419, util=89.79% 00:12:27.113 sdd: ios=484/468, merge=0/0, 
ticks=22936/303844, in_queue=326780, util=90.92% 00:12:27.113 sde: ios=484/460, merge=0/0, ticks=26073/296589, in_queue=322662, util=89.61% 00:12:27.113 sdf: ios=496/461, merge=0/0, ticks=25429/299666, in_queue=325095, util=90.22% 00:12:27.113 sdg: ios=493/464, merge=0/0, ticks=24350/302658, in_queue=327008, util=91.35% 00:12:27.113 sdh: ios=424/461, merge=0/0, ticks=22840/302031, in_queue=324871, util=91.12% 00:12:27.113 sdi: ios=430/461, merge=0/0, ticks=23424/300145, in_queue=323570, util=91.27% 00:12:27.113 sdj: ios=503/460, merge=0/0, ticks=28405/294867, in_queue=323272, util=91.90% 00:12:27.113 sdk: ios=516/460, merge=0/0, ticks=27805/295471, in_queue=323276, util=91.90% 00:12:27.113 sdl: ios=464/461, merge=0/0, ticks=25498/299260, in_queue=324758, util=92.82% 00:12:27.113 sdm: ios=520/461, merge=0/0, ticks=27470/298019, in_queue=325489, util=92.94% 00:12:27.113 sdn: ios=494/461, merge=0/0, ticks=26579/297235, in_queue=323815, util=92.72% 00:12:27.113 sdo: ios=460/460, merge=0/0, ticks=24280/300668, in_queue=324948, util=93.30% 00:12:27.113 sdp: ios=453/461, merge=0/0, ticks=23624/302424, in_queue=326048, util=94.09% 00:12:27.113 sdq: ios=505/460, merge=0/0, ticks=27097/296347, in_queue=323444, util=93.28% 00:12:27.113 sdr: ios=442/461, merge=0/0, ticks=24012/300308, in_queue=324321, util=94.16% 00:12:27.113 sds: ios=471/461, merge=0/0, ticks=24773/299690, in_queue=324463, util=94.68% 00:12:27.113 sdt: ios=529/460, merge=0/0, ticks=28964/294287, in_queue=323252, util=94.77% 00:12:27.113 sdu: ios=463/460, merge=0/0, ticks=23871/299353, in_queue=323225, util=94.74% 00:12:27.114 sdv: ios=432/463, merge=0/0, ticks=21671/304778, in_queue=326450, util=96.12% 00:12:27.114 sdw: ios=557/463, merge=0/0, ticks=27481/299097, in_queue=326578, util=95.95% 00:12:27.114 sdx: ios=455/461, merge=0/0, ticks=23180/302676, in_queue=325857, util=96.18% 00:12:27.114 sdy: ios=497/460, merge=0/0, ticks=25249/299141, in_queue=324391, util=95.54% 00:12:27.114 sdz: ios=492/461, 
merge=0/0, ticks=25306/300036, in_queue=325343, util=96.42% 00:12:27.114 sdaa: ios=438/460, merge=0/0, ticks=23213/300616, in_queue=323830, util=95.90% 00:12:27.114 sdab: ios=450/461, merge=0/0, ticks=24136/301715, in_queue=325852, util=96.90% 00:12:27.114 sdac: ios=545/461, merge=0/0, ticks=28820/297411, in_queue=326231, util=97.02% 00:12:27.114 sdad: ios=507/459, merge=0/0, ticks=27893/295062, in_queue=322956, util=96.88% 00:12:27.114 [2024-07-25 10:12:59.760577] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.114 [2024-07-25 10:12:59.762822] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.114 [2024-07-25 10:12:59.765136] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.114 [2024-07-25 10:12:59.769610] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.114 [2024-07-25 10:12:59.771831] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.114 [2024-07-25 10:12:59.774429] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.114 [2024-07-25 10:12:59.777244] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.114 [2024-07-25 10:12:59.781029] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.114 10:12:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 262144 -d 16 -t randwrite -r 10 00:12:27.114 [2024-07-25 10:12:59.784744] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.114 [2024-07-25 10:12:59.787560] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.114 [2024-07-25 10:12:59.790225] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.114 [2024-07-25 
10:12:59.794815] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.114 [global] 00:12:27.114 thread=1 00:12:27.114 invalidate=1 00:12:27.114 rw=randwrite 00:12:27.114 time_based=1 00:12:27.114 runtime=10 00:12:27.114 ioengine=libaio 00:12:27.114 direct=1 00:12:27.114 bs=262144 00:12:27.114 iodepth=16 00:12:27.114 norandommap=1 00:12:27.114 numjobs=1 00:12:27.114 00:12:27.114 [job0] 00:12:27.114 filename=/dev/sda 00:12:27.114 [job1] 00:12:27.114 filename=/dev/sdb 00:12:27.114 [job2] 00:12:27.114 filename=/dev/sdc 00:12:27.114 [job3] 00:12:27.114 filename=/dev/sdd 00:12:27.114 [job4] 00:12:27.114 filename=/dev/sde 00:12:27.114 [job5] 00:12:27.114 filename=/dev/sdf 00:12:27.114 [job6] 00:12:27.114 filename=/dev/sdg 00:12:27.114 [job7] 00:12:27.114 filename=/dev/sdh 00:12:27.114 [job8] 00:12:27.114 filename=/dev/sdi 00:12:27.114 [job9] 00:12:27.114 filename=/dev/sdj 00:12:27.114 [job10] 00:12:27.114 filename=/dev/sdk 00:12:27.114 [job11] 00:12:27.114 filename=/dev/sdl 00:12:27.114 [job12] 00:12:27.114 filename=/dev/sdm 00:12:27.114 [job13] 00:12:27.114 filename=/dev/sdn 00:12:27.114 [job14] 00:12:27.114 filename=/dev/sdo 00:12:27.114 [job15] 00:12:27.114 filename=/dev/sdp 00:12:27.114 [job16] 00:12:27.114 filename=/dev/sdq 00:12:27.114 [job17] 00:12:27.114 filename=/dev/sdr 00:12:27.114 [job18] 00:12:27.114 filename=/dev/sds 00:12:27.114 [job19] 00:12:27.114 filename=/dev/sdt 00:12:27.114 [job20] 00:12:27.114 filename=/dev/sdu 00:12:27.114 [job21] 00:12:27.114 filename=/dev/sdv 00:12:27.114 [job22] 00:12:27.114 filename=/dev/sdw 00:12:27.114 [job23] 00:12:27.114 filename=/dev/sdx 00:12:27.114 [job24] 00:12:27.114 filename=/dev/sdy 00:12:27.114 [job25] 00:12:27.114 filename=/dev/sdz 00:12:27.114 [job26] 00:12:27.114 filename=/dev/sdaa 00:12:27.114 [job27] 00:12:27.114 filename=/dev/sdab 00:12:27.114 [job28] 00:12:27.114 filename=/dev/sdac 00:12:27.114 [job29] 00:12:27.114 filename=/dev/sdad 00:12:27.373 queue_depth set to 113 (sda) 
00:12:27.373 queue_depth set to 113 (sdb) 00:12:27.373 queue_depth set to 113 (sdc) 00:12:27.373 queue_depth set to 113 (sdd) 00:12:27.373 queue_depth set to 113 (sde) 00:12:27.373 queue_depth set to 113 (sdf) 00:12:27.373 queue_depth set to 113 (sdg) 00:12:27.373 queue_depth set to 113 (sdh) 00:12:27.373 queue_depth set to 113 (sdi) 00:12:27.373 queue_depth set to 113 (sdj) 00:12:27.373 queue_depth set to 113 (sdk) 00:12:27.373 queue_depth set to 113 (sdl) 00:12:27.373 queue_depth set to 113 (sdm) 00:12:27.373 queue_depth set to 113 (sdn) 00:12:27.373 queue_depth set to 113 (sdo) 00:12:27.373 queue_depth set to 113 (sdp) 00:12:27.373 queue_depth set to 113 (sdq) 00:12:27.373 queue_depth set to 113 (sdr) 00:12:27.373 queue_depth set to 113 (sds) 00:12:27.373 queue_depth set to 113 (sdt) 00:12:27.373 queue_depth set to 113 (sdu) 00:12:27.373 queue_depth set to 113 (sdv) 00:12:27.373 queue_depth set to 113 (sdw) 00:12:27.373 queue_depth set to 113 (sdx) 00:12:27.373 queue_depth set to 113 (sdy) 00:12:27.373 queue_depth set to 113 (sdz) 00:12:27.373 queue_depth set to 113 (sdaa) 00:12:27.373 queue_depth set to 113 (sdab) 00:12:27.373 queue_depth set to 113 (sdac) 00:12:27.373 queue_depth set to 113 (sdad) 00:12:27.373 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 
00:12:27.373 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job11: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job12: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job13: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job14: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job15: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job16: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job17: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job18: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job19: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job20: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job21: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job22: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job23: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job24: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job25: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job26: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job27: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job28: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 job29: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:27.373 fio-3.35 00:12:27.373 Starting 30 threads 00:12:27.373 [2024-07-25 10:13:00.622642] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.373 [2024-07-25 10:13:00.625326] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.373 [2024-07-25 10:13:00.628120] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.630499] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.632855] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.634962] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.637070] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 
0xb9 00:12:27.632 [2024-07-25 10:13:00.639108] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.641156] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.643082] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.644994] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.646983] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.648759] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.650496] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.652420] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.654313] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.656161] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.658065] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.659945] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.661705] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.663453] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.665234] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.667044] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.668951] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.670919] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.673218] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.675431] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.677386] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.679177] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.632 [2024-07-25 10:13:00.681119] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.935 [2024-07-25 10:13:11.487282] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.935 [2024-07-25 10:13:11.514614] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.935 [2024-07-25 10:13:11.519646] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.935 [2024-07-25 10:13:11.522355] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.935 [2024-07-25 10:13:11.524600] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.935 [2024-07-25 10:13:11.526845] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.935 [2024-07-25 10:13:11.529010] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.935 [2024-07-25 10:13:11.531265] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.935 [2024-07-25 10:13:11.533591] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.935 [2024-07-25 10:13:11.539854] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9
00:12:39.935 [2024-07-25 10:13:11.542125] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:39.935 [2024-07-25 10:13:11.544359] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:39.935 [2024-07-25 10:13:11.546539] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:39.935 [2024-07-25 10:13:11.548699] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:39.935 [2024-07-25 10:13:11.550910] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:39.935
00:12:39.935 job0: (groupid=0, jobs=1): err= 0: pid=72477: Thu Jul 25 10:13:11 2024
00:12:39.935 write: IOPS=81, BW=20.4MiB/s (21.4MB/s)(207MiB/10166msec); 0 zone resets
00:12:39.935 slat (usec): min=17, max=344, avg=66.43, stdev=38.47
00:12:39.935 clat (msec): min=18, max=329, avg=195.93, stdev=25.80
00:12:39.935 lat (msec): min=18, max=329, avg=195.99, stdev=25.81
00:12:39.935 clat percentiles (msec):
00:12:39.935 | 1.00th=[ 109], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.935 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.935 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.935 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 330],
00:12:39.935 | 99.99th=[ 330]
00:12:39.935 bw ( KiB/s): min=17408, max=23040, per=3.33%, avg=20831.95, stdev=1862.47, samples=20
00:12:39.935 iops : min= 68, max= 90, avg=81.25, stdev= 7.24, samples=20
00:12:39.935 lat (msec) : 20=0.12%, 50=0.24%, 100=0.60%, 250=98.07%, 500=0.97%
00:12:39.935 cpu : usr=0.25%, sys=0.34%, ctx=901, majf=0, minf=1
00:12:39.935 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.935 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.935 issued rwts: total=0,829,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.935 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.935 job1: (groupid=0, jobs=1): err= 0: pid=72481: Thu Jul 25 10:13:11 2024
00:12:39.935 write: IOPS=82, BW=20.6MiB/s (21.6MB/s)(209MiB/10168msec); 0 zone resets
00:12:39.935 slat (usec): min=23, max=384, avg=51.60, stdev=21.65
00:12:39.935 clat (msec): min=3, max=326, avg=194.11, stdev=31.74
00:12:39.935 lat (msec): min=4, max=326, avg=194.16, stdev=31.74
00:12:39.935 clat percentiles (msec):
00:12:39.935 | 1.00th=[ 20], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.935 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.935 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.935 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 326], 99.95th=[ 326],
00:12:39.935 | 99.99th=[ 326]
00:12:39.935 bw ( KiB/s): min=17373, max=26164, per=3.36%, avg=21037.85, stdev=2231.91, samples=20
00:12:39.935 iops : min= 67, max= 102, avg=82.00, stdev= 8.80, samples=20
00:12:39.935 lat (msec) : 4=0.12%, 10=0.24%, 20=0.72%, 50=0.36%, 100=0.60%
00:12:39.935 lat (msec) : 250=97.13%, 500=0.84%
00:12:39.935 cpu : usr=0.24%, sys=0.33%, ctx=864, majf=0, minf=1
00:12:39.935 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.935 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.935 issued rwts: total=0,837,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.935 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.935 job2: (groupid=0, jobs=1): err= 0: pid=72489: Thu Jul 25 10:13:11 2024
00:12:39.935 write: IOPS=81, BW=20.4MiB/s (21.4MB/s)(208MiB/10177msec); 0 zone resets
00:12:39.935 slat (usec): min=25, max=251, avg=50.67, stdev=20.26
00:12:39.935 clat (msec): min=6, max=332, avg=195.44, stdev=28.36
00:12:39.935 lat (msec): min=6, max=333, avg=195.50, stdev=28.37
00:12:39.935 clat percentiles (msec):
00:12:39.935 | 1.00th=[ 73], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.935 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.935 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.935 | 99.00th=[ 247], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334],
00:12:39.935 | 99.99th=[ 334]
00:12:39.935 bw ( KiB/s): min=16896, max=23086, per=3.34%, avg=20909.05, stdev=1926.11, samples=20
00:12:39.935 iops : min= 66, max= 90, avg=81.50, stdev= 7.51, samples=20
00:12:39.935 lat (msec) : 10=0.24%, 20=0.24%, 50=0.36%, 100=0.48%, 250=97.72%
00:12:39.935 lat (msec) : 500=0.96%
00:12:39.935 cpu : usr=0.20%, sys=0.34%, ctx=851, majf=0, minf=1
00:12:39.935 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.935 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.935 issued rwts: total=0,832,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.935 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.935 job3: (groupid=0, jobs=1): err= 0: pid=72490: Thu Jul 25 10:13:11 2024
00:12:39.935 write: IOPS=81, BW=20.4MiB/s (21.4MB/s)(207MiB/10162msec); 0 zone resets
00:12:39.935 slat (usec): min=20, max=278, avg=50.79, stdev=23.32
00:12:39.935 clat (msec): min=23, max=320, avg=195.88, stdev=25.33
00:12:39.935 lat (msec): min=23, max=320, avg=195.93, stdev=25.33
00:12:39.935 clat percentiles (msec):
00:12:39.935 | 1.00th=[ 112], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.935 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.935 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.935 | 99.00th=[ 247], 99.50th=[ 275], 99.90th=[ 321], 99.95th=[ 321],
00:12:39.935 | 99.99th=[ 321]
00:12:39.935 bw ( KiB/s): min=17408, max=23040, per=3.33%, avg=20836.00, stdev=1846.37, samples=20
00:12:39.935 iops : min= 68, max= 90, avg=81.30, stdev= 7.12, samples=20
00:12:39.935 lat (msec) : 50=0.36%, 100=0.48%, 250=98.31%, 500=0.84%
00:12:39.935 cpu : usr=0.20%, sys=0.31%, ctx=860, majf=0, minf=1
00:12:39.935 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.935 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.935 issued rwts: total=0,829,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.935 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.935 job4: (groupid=0, jobs=1): err= 0: pid=72497: Thu Jul 25 10:13:11 2024
00:12:39.935 write: IOPS=81, BW=20.4MiB/s (21.4MB/s)(207MiB/10171msec); 0 zone resets
00:12:39.935 slat (usec): min=25, max=372, avg=64.51, stdev=35.29
00:12:39.935 clat (msec): min=17, max=326, avg=196.02, stdev=25.74
00:12:39.935 lat (msec): min=17, max=326, avg=196.08, stdev=25.74
00:12:39.935 clat percentiles (msec):
00:12:39.935 | 1.00th=[ 108], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.935 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.935 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.935 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 326], 99.95th=[ 326],
00:12:39.935 | 99.99th=[ 326]
00:12:39.935 bw ( KiB/s): min=17408, max=23040, per=3.33%, avg=20833.70, stdev=1852.27, samples=20
00:12:39.935 iops : min= 68, max= 90, avg=81.25, stdev= 7.20, samples=20
00:12:39.935 lat (msec) : 20=0.12%, 50=0.24%, 100=0.60%, 250=98.19%, 500=0.84%
00:12:39.935 cpu : usr=0.29%, sys=0.31%, ctx=905, majf=0, minf=1
00:12:39.935 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.935 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.936 issued rwts: total=0,829,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.936 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.936 job5: (groupid=0, jobs=1): err= 0: pid=72499: Thu Jul 25 10:13:11 2024
00:12:39.936 write: IOPS=81, BW=20.4MiB/s (21.4MB/s)(207MiB/10169msec); 0 zone resets
00:12:39.936 slat (usec): min=26, max=1443, avg=54.89, stdev=50.81
00:12:39.936 clat (msec): min=16, max=328, avg=195.99, stdev=25.95
00:12:39.936 lat (msec): min=16, max=328, avg=196.05, stdev=25.95
00:12:39.936 clat percentiles (msec):
00:12:39.936 | 1.00th=[ 106], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.936 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.936 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.936 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 330],
00:12:39.936 | 99.99th=[ 330]
00:12:39.936 bw ( KiB/s): min=16896, max=23040, per=3.33%, avg=20833.70, stdev=1851.85, samples=20
00:12:39.936 iops : min= 66, max= 90, avg=81.25, stdev= 7.18, samples=20
00:12:39.936 lat (msec) : 20=0.12%, 50=0.36%, 100=0.48%, 250=98.07%, 500=0.97%
00:12:39.936 cpu : usr=0.26%, sys=0.35%, ctx=841, majf=0, minf=1
00:12:39.936 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.936 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.936 issued rwts: total=0,829,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.936 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.936 job6: (groupid=0, jobs=1): err= 0: pid=72513: Thu Jul 25 10:13:11 2024
00:12:39.936 write: IOPS=81, BW=20.4MiB/s (21.4MB/s)(207MiB/10159msec); 0 zone resets
00:12:39.936 slat (usec): min=28, max=351, avg=55.92, stdev=21.39
00:12:39.936 clat (msec): min=20, max=322, avg=196.04, stdev=25.42
00:12:39.936 lat (msec): min=20, max=322, avg=196.10, stdev=25.42
00:12:39.936 clat percentiles (msec):
00:12:39.936 | 1.00th=[ 111], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.936 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.936 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.936 | 99.00th=[ 247], 99.50th=[ 279], 99.90th=[ 321], 99.95th=[ 321],
00:12:39.936 | 99.99th=[ 321]
00:12:39.936 bw ( KiB/s): min=17408, max=23040, per=3.33%, avg=20812.35, stdev=1824.44, samples=20
00:12:39.936 iops : min= 68, max= 90, avg=81.25, stdev= 7.09, samples=20
00:12:39.936 lat (msec) : 50=0.36%, 100=0.60%, 250=98.19%, 500=0.85%
00:12:39.936 cpu : usr=0.23%, sys=0.39%, ctx=853, majf=0, minf=1
00:12:39.936 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.936 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.936 issued rwts: total=0,828,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.936 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.936 job7: (groupid=0, jobs=1): err= 0: pid=72577: Thu Jul 25 10:13:11 2024
00:12:39.936 write: IOPS=81, BW=20.4MiB/s (21.3MB/s)(207MiB/10169msec); 0 zone resets
00:12:39.936 slat (usec): min=18, max=734, avg=58.98, stdev=35.14
00:12:39.936 clat (msec): min=17, max=332, avg=196.22, stdev=26.43
00:12:39.936 lat (msec): min=17, max=332, avg=196.28, stdev=26.43
00:12:39.936 clat percentiles (msec):
00:12:39.936 | 1.00th=[ 102], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.936 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.936 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.936 | 99.00th=[ 247], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334],
00:12:39.936 | 99.99th=[ 334]
00:12:39.936 bw ( KiB/s): min=17408, max=23040, per=3.33%, avg=20810.80, stdev=1889.39, samples=20
00:12:39.936 iops : min= 68, max= 90, avg=81.20, stdev= 7.37, samples=20
00:12:39.936 lat (msec) : 20=0.12%, 50=0.36%, 100=0.48%, 250=98.07%, 500=0.97%
00:12:39.936 cpu : usr=0.21%, sys=0.35%, ctx=861, majf=0, minf=1
00:12:39.936 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.936 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.936 issued rwts: total=0,828,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.936 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.936 job8: (groupid=0, jobs=1): err= 0: pid=72618: Thu Jul 25 10:13:11 2024
00:12:39.936 write: IOPS=81, BW=20.3MiB/s (21.3MB/s)(207MiB/10174msec); 0 zone resets
00:12:39.936 slat (usec): min=24, max=3171, avg=56.48, stdev=110.38
00:12:39.936 clat (msec): min=16, max=329, avg=196.50, stdev=26.12
00:12:39.936 lat (msec): min=19, max=329, avg=196.56, stdev=26.09
00:12:39.936 clat percentiles (msec):
00:12:39.936 | 1.00th=[ 107], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.936 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.936 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.936 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 330],
00:12:39.936 | 99.99th=[ 330]
00:12:39.936 bw ( KiB/s): min=17408, max=23040, per=3.32%, avg=20785.20, stdev=1875.67, samples=20
00:12:39.936 iops : min= 68, max= 90, avg=81.10, stdev= 7.33, samples=20
00:12:39.936 lat (msec) : 20=0.12%, 50=0.36%, 100=0.48%, 250=98.07%, 500=0.97%
00:12:39.936 cpu : usr=0.23%, sys=0.34%, ctx=855, majf=0, minf=1
00:12:39.936 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.936 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.936 issued rwts: total=0,827,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.936 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.936 job9: (groupid=0, jobs=1): err= 0: pid=72642: Thu Jul 25 10:13:11 2024
00:12:39.936 write: IOPS=81, BW=20.4MiB/s (21.4MB/s)(207MiB/10163msec); 0 zone resets
00:12:39.936 slat (usec): min=21, max=1454, avg=53.28, stdev=54.49
00:12:39.936 clat (msec): min=19, max=319, avg=195.90, stdev=25.33
00:12:39.936 lat (msec): min=19, max=319, avg=195.95, stdev=25.33
00:12:39.936 clat percentiles (msec):
00:12:39.936 | 1.00th=[ 110], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.936 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.936 | 70.00th=[ 207], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.936 | 99.00th=[ 247], 99.50th=[ 275], 99.90th=[ 321], 99.95th=[ 321],
00:12:39.936 | 99.99th=[ 321]
00:12:39.936 bw ( KiB/s): min=17408, max=23040, per=3.33%, avg=20833.85, stdev=1882.53, samples=20
00:12:39.936 iops : min= 68, max= 90, avg=81.30, stdev= 7.27, samples=20
00:12:39.936 lat (msec) : 20=0.12%, 50=0.24%, 100=0.60%, 250=98.19%, 500=0.84%
00:12:39.936 cpu : usr=0.21%, sys=0.30%, ctx=876, majf=0, minf=1
00:12:39.936 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.936 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.936 issued rwts: total=0,829,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.936 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.936 job10: (groupid=0, jobs=1): err= 0: pid=72643: Thu Jul 25 10:13:11 2024
00:12:39.936 write: IOPS=81, BW=20.4MiB/s (21.3MB/s)(207MiB/10170msec); 0 zone resets
00:12:39.936 slat (usec): min=25, max=298, avg=61.10, stdev=27.54
00:12:39.936 clat (msec): min=17, max=329, avg=196.24, stdev=26.05
00:12:39.936 lat (msec): min=17, max=329, avg=196.30, stdev=26.05
00:12:39.936 clat percentiles (msec):
00:12:39.936 | 1.00th=[ 108], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.936 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.936 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.936 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 330],
00:12:39.936 | 99.99th=[ 330]
00:12:39.936 bw ( KiB/s): min=16896, max=23040, per=3.32%, avg=20806.50, stdev=1901.96, samples=20
00:12:39.936 iops : min= 66, max= 90, avg=81.10, stdev= 7.41, samples=20
00:12:39.936 lat (msec) : 20=0.12%, 50=0.24%, 100=0.60%, 250=98.07%, 500=0.97%
00:12:39.936 cpu : usr=0.20%, sys=0.32%, ctx=855, majf=0, minf=1
00:12:39.936 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.936 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.936 issued rwts: total=0,828,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.936 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.937 job11: (groupid=0, jobs=1): err= 0: pid=72644: Thu Jul 25 10:13:11 2024
00:12:39.937 write: IOPS=81, BW=20.4MiB/s (21.4MB/s)(207MiB/10169msec); 0 zone resets
00:12:39.937 slat (usec): min=20, max=197, avg=47.36, stdev=15.53
00:12:39.937 clat (msec): min=19, max=327, avg=196.00, stdev=25.75
00:12:39.937 lat (msec): min=19, max=327, avg=196.04, stdev=25.75
00:12:39.937 clat percentiles (msec):
00:12:39.937 | 1.00th=[ 110], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.937 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.937 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.937 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 330],
00:12:39.937 | 99.99th=[ 330]
00:12:39.937 bw ( KiB/s): min=17408, max=23040, per=3.33%, avg=20835.95, stdev=1816.54, samples=20
00:12:39.937 iops : min= 68, max= 90, avg=81.25, stdev= 7.06, samples=20
00:12:39.937 lat (msec) : 20=0.12%, 50=0.24%, 100=0.60%, 250=98.19%, 500=0.84%
00:12:39.937 cpu : usr=0.25%, sys=0.26%, ctx=843, majf=0, minf=1
00:12:39.937 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.937 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.937 issued rwts: total=0,829,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.937 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.937 job12: (groupid=0, jobs=1): err= 0: pid=72645: Thu Jul 25 10:13:11 2024
00:12:39.937 write: IOPS=82, BW=20.5MiB/s (21.5MB/s)(209MiB/10161msec); 0 zone resets
00:12:39.937 slat (usec): min=24, max=7065, avg=60.27, stdev=243.28
00:12:39.937 clat (msec): min=7, max=327, avg=194.38, stdev=32.11
00:12:39.937 lat (msec): min=7, max=327, avg=194.44, stdev=32.06
00:12:39.937 clat percentiles (msec):
00:12:39.937 | 1.00th=[ 20], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.937 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.937 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.937 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 330],
00:12:39.937 | 99.99th=[ 330]
00:12:39.937 bw ( KiB/s): min=17373, max=26164, per=3.35%, avg=20986.70, stdev=2189.31, samples=20
00:12:39.937 iops : min= 67, max= 102, avg=81.80, stdev= 8.64, samples=20
00:12:39.937 lat (msec) : 10=0.72%, 20=0.36%, 50=0.36%, 100=0.60%, 250=97.13%
00:12:39.937 lat (msec) : 500=0.84%
00:12:39.937 cpu : usr=0.19%, sys=0.40%, ctx=839, majf=0, minf=1
00:12:39.937 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.937 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.937 issued rwts: total=0,835,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.937 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.937 job13: (groupid=0, jobs=1): err= 0: pid=72646: Thu Jul 25 10:13:11 2024
00:12:39.937 write: IOPS=81, BW=20.4MiB/s (21.3MB/s)(207MiB/10170msec); 0 zone resets
00:12:39.937 slat (usec): min=24, max=3232, avg=55.67, stdev=111.43
00:12:39.937 clat (msec): min=16, max=332, avg=196.17, stdev=26.08
00:12:39.937 lat (msec): min=19, max=332, avg=196.22, stdev=26.06
00:12:39.937 clat percentiles (msec):
00:12:39.937 | 1.00th=[ 107], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.937 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.937 | 70.00th=[ 205], 80.00th=[ 211], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.937 | 99.00th=[ 247], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334],
00:12:39.937 | 99.99th=[ 334]
00:12:39.937 bw ( KiB/s): min=17408, max=23040, per=3.33%, avg=20810.75, stdev=1874.07, samples=20
00:12:39.937 iops : min= 68, max= 90, avg=81.20, stdev= 7.32, samples=20
00:12:39.937 lat (msec) : 20=0.12%, 50=0.36%, 100=0.48%, 250=98.07%, 500=0.97%
00:12:39.937 cpu : usr=0.29%, sys=0.30%, ctx=832, majf=0, minf=1
00:12:39.937 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.937 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.937 issued rwts: total=0,828,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.937 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.937 job14: (groupid=0, jobs=1): err= 0: pid=72647: Thu Jul 25 10:13:11 2024
00:12:39.937 write: IOPS=81, BW=20.4MiB/s (21.3MB/s)(207MiB/10170msec); 0 zone resets
00:12:39.937 slat (usec): min=17, max=114, avg=50.36, stdev=13.58
00:12:39.937 clat (msec): min=17, max=328, avg=196.26, stdev=25.93
00:12:39.937 lat (msec): min=17, max=328, avg=196.31, stdev=25.93
00:12:39.937 clat percentiles (msec):
00:12:39.937 | 1.00th=[ 107], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.937 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.937 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.937 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 330],
00:12:39.937 | 99.99th=[ 330]
00:12:39.937 bw ( KiB/s): min=16896, max=23040, per=3.32%, avg=20806.50, stdev=1872.72, samples=20
00:12:39.937 iops : min= 66, max= 90, avg=81.10, stdev= 7.30, samples=20
00:12:39.937 lat (msec) : 20=0.12%, 50=0.36%, 100=0.48%, 250=98.07%, 500=0.97%
00:12:39.937 cpu : usr=0.29%, sys=0.29%, ctx=827, majf=0, minf=1
00:12:39.937 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.937 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.937 issued rwts: total=0,828,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.937 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.937 job15: (groupid=0, jobs=1): err= 0: pid=72648: Thu Jul 25 10:13:11 2024
00:12:39.937 write: IOPS=82, BW=20.6MiB/s (21.6MB/s)(209MiB/10170msec); 0 zone resets
00:12:39.937 slat (usec): min=17, max=195, avg=51.39, stdev=14.76
00:12:39.937 clat (usec): min=1951, max=326689, avg=194139.74, stdev=32571.84
00:12:39.937 lat (usec): min=1999, max=326723, avg=194191.13, stdev=32576.10
00:12:39.937 clat percentiles (msec):
00:12:39.937 | 1.00th=[ 18], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.937 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.937 | 70.00th=[ 205], 80.00th=[ 211], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.937 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 326], 99.95th=[ 326],
00:12:39.937 | 99.99th=[ 326]
00:12:39.937 bw ( KiB/s): min=16862, max=26624, per=3.36%, avg=21035.30, stdev=2317.08, samples=20
00:12:39.937 iops : min= 65, max= 104, avg=82.00, stdev= 9.17, samples=20
00:12:39.937 lat (msec) : 2=0.12%, 4=0.12%, 10=0.36%, 20=0.48%, 50=0.48%
00:12:39.937 lat (msec) : 100=0.60%, 250=97.01%, 500=0.84%
00:12:39.937 cpu : usr=0.16%, sys=0.40%, ctx=842, majf=0, minf=1
00:12:39.937 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.937 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.937 issued rwts: total=0,837,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.937 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.937 job16: (groupid=0, jobs=1): err= 0: pid=72649: Thu Jul 25 10:13:11 2024
00:12:39.937 write: IOPS=81, BW=20.4MiB/s (21.4MB/s)(207MiB/10168msec); 0 zone resets
00:12:39.937 slat (usec): min=18, max=237, avg=53.31, stdev=18.23
00:12:39.937 clat (msec): min=18, max=326, avg=195.97, stdev=25.70
00:12:39.937 lat (msec): min=18, max=327, avg=196.03, stdev=25.70
00:12:39.937 clat percentiles (msec):
00:12:39.937 | 1.00th=[ 110], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.937 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.937 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 239],
00:12:39.937 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 326], 99.95th=[ 326],
00:12:39.937 | 99.99th=[ 326]
00:12:39.937 bw ( KiB/s): min=17408, max=23040, per=3.33%, avg=20834.10, stdev=1825.91, samples=20
00:12:39.937 iops : min= 68, max= 90, avg=81.25, stdev= 7.09, samples=20
00:12:39.937 lat (msec) : 20=0.12%, 50=0.24%, 100=0.60%, 250=98.19%, 500=0.84%
00:12:39.937 cpu : usr=0.24%, sys=0.35%, ctx=844, majf=0, minf=1
00:12:39.937 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.937 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.937 issued rwts: total=0,829,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.937 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.937 job17: (groupid=0, jobs=1): err= 0: pid=72650: Thu Jul 25 10:13:11 2024
00:12:39.937 write: IOPS=81, BW=20.4MiB/s (21.4MB/s)(207MiB/10163msec); 0 zone resets
00:12:39.937 slat (usec): min=16, max=360, avg=51.58, stdev=16.82
00:12:39.937 clat (msec): min=22, max=319, avg=195.90, stdev=25.21
00:12:39.937 lat (msec): min=22, max=319, avg=195.95, stdev=25.21
00:12:39.937 clat percentiles (msec):
00:12:39.937 | 1.00th=[ 113], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.937 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.937 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 239],
00:12:39.937 | 99.00th=[ 247], 99.50th=[ 275], 99.90th=[ 321], 99.95th=[ 321],
00:12:39.937 | 99.99th=[ 321]
00:12:39.937 bw ( KiB/s): min=17408, max=23040, per=3.33%, avg=20834.20, stdev=1886.66, samples=20
00:12:39.937 iops : min= 68, max= 90, avg=81.30, stdev= 7.37, samples=20
00:12:39.937 lat (msec) : 50=0.36%, 100=0.48%, 250=98.31%, 500=0.84%
00:12:39.937 cpu : usr=0.31%, sys=0.28%, ctx=829, majf=0, minf=1
00:12:39.937 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.937 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.937 issued rwts: total=0,829,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.938 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.938 job18: (groupid=0, jobs=1): err= 0: pid=72651: Thu Jul 25 10:13:11 2024
00:12:39.938 write: IOPS=81, BW=20.4MiB/s (21.3MB/s)(207MiB/10170msec); 0 zone resets
00:12:39.938 slat (usec): min=29, max=126, avg=52.30, stdev=13.45
00:12:39.938 clat (msec): min=16, max=329, avg=196.26, stdev=26.13
00:12:39.938 lat (msec): min=16, max=329, avg=196.31, stdev=26.13
00:12:39.938 clat percentiles (msec):
00:12:39.938 | 1.00th=[ 107], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.938 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.938 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.938 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 330],
00:12:39.938 | 99.99th=[ 330]
00:12:39.938 bw ( KiB/s): min=16896, max=23040, per=3.32%, avg=20804.35, stdev=1901.10, samples=20
00:12:39.938 iops : min= 66, max= 90, avg=81.10, stdev= 7.41, samples=20
00:12:39.938 lat (msec) : 20=0.12%, 50=0.36%, 100=0.48%, 250=98.07%, 500=0.97%
00:12:39.938 cpu : usr=0.21%, sys=0.36%, ctx=827, majf=0, minf=1
00:12:39.938 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.938 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.938 issued rwts: total=0,828,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.938 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.938 job19: (groupid=0, jobs=1): err= 0: pid=72652: Thu Jul 25 10:13:11 2024
00:12:39.938 write: IOPS=81, BW=20.4MiB/s (21.4MB/s)(207MiB/10171msec); 0 zone resets
00:12:39.938 slat (usec): min=22, max=178, avg=53.37, stdev=15.56
00:12:39.938 clat (msec): min=18, max=326, avg=196.04, stdev=25.69
00:12:39.938 lat (msec): min=18, max=326, avg=196.10, stdev=25.69
00:12:39.938 clat percentiles (msec):
00:12:39.938 | 1.00th=[ 108], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.938 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.938 | 70.00th=[ 207], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.938 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 326], 99.95th=[ 326],
00:12:39.938 | 99.99th=[ 326]
00:12:39.938 bw ( KiB/s): min=17408, max=23040, per=3.32%, avg=20808.05, stdev=1889.65, samples=20
00:12:39.938 iops : min= 68, max= 90, avg=81.15, stdev= 7.33, samples=20
00:12:39.938 lat (msec) : 20=0.12%, 50=0.24%, 100=0.60%, 250=98.19%, 500=0.84%
00:12:39.938 cpu : usr=0.24%, sys=0.37%, ctx=838, majf=0, minf=1
00:12:39.938 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.938 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.938 issued rwts: total=0,829,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.938 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.938 job20: (groupid=0, jobs=1): err= 0: pid=72653: Thu Jul 25 10:13:11 2024
00:12:39.938 write: IOPS=81, BW=20.3MiB/s (21.3MB/s)(207MiB/10162msec); 0 zone resets
00:12:39.938 slat (usec): min=25, max=215, avg=52.44, stdev=14.92
00:12:39.938 clat (msec): min=19, max=324, avg=196.34, stdev=25.63
00:12:39.938 lat (msec): min=19, max=324, avg=196.40, stdev=25.63
00:12:39.938 clat percentiles (msec):
00:12:39.938 | 1.00th=[ 110], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 180],
00:12:39.938 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.938 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.938 | 99.00th=[ 247], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 326],
00:12:39.938 | 99.99th=[ 326]
00:12:39.938 bw ( KiB/s): min=17408, max=23040, per=3.32%, avg=20782.60, stdev=1869.19, samples=20
00:12:39.938 iops : min= 68, max= 90, avg=81.10, stdev= 7.20, samples=20
00:12:39.938 lat (msec) : 20=0.12%, 50=0.24%, 100=0.60%, 250=98.19%, 500=0.85%
00:12:39.938 cpu : usr=0.25%, sys=0.33%, ctx=829, majf=0, minf=1
00:12:39.938 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.938 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.938 issued rwts: total=0,827,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.938 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.938 job21: (groupid=0, jobs=1): err= 0: pid=72654: Thu Jul 25 10:13:11 2024
00:12:39.938 write: IOPS=81, BW=20.4MiB/s (21.4MB/s)(207MiB/10173msec); 0 zone resets
00:12:39.938 slat (usec): min=25, max=320, avg=51.40, stdev=25.77
00:12:39.938 clat (msec): min=14, max=330, avg=196.08, stdev=26.30
00:12:39.938 lat (msec): min=14, max=330, avg=196.13, stdev=26.30
00:12:39.938 clat percentiles (msec):
00:12:39.938 | 1.00th=[ 102], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.938 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.938 | 70.00th=[ 207], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.938 | 99.00th=[ 247], 99.50th=[ 288], 99.90th=[ 330], 99.95th=[ 330],
00:12:39.938 | 99.99th=[ 330]
00:12:39.938 bw ( KiB/s): min=17408, max=23040, per=3.33%, avg=20834.25, stdev=1850.45, samples=20
00:12:39.938 iops : min= 68, max= 90, avg=81.30, stdev= 7.24, samples=20
00:12:39.938 lat (msec) : 20=0.12%, 50=0.36%, 100=0.48%, 250=98.07%, 500=0.97%
00:12:39.938 cpu : usr=0.21%, sys=0.29%, ctx=862, majf=0, minf=1
00:12:39.938 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.938 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.938 issued rwts: total=0,829,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.938 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.938 job22: (groupid=0, jobs=1): err= 0: pid=72655: Thu Jul 25 10:13:11 2024
00:12:39.938 write: IOPS=81, BW=20.4MiB/s (21.4MB/s)(207MiB/10175msec); 0 zone resets
00:12:39.938 slat (usec): min=26, max=1323, avg=53.06, stdev=46.42
00:12:39.938 clat (msec): min=10, max=330, avg=196.11, stdev=26.46
00:12:39.938 lat (msec): min=10, max=330, avg=196.16, stdev=26.46
00:12:39.938 clat percentiles (msec):
00:12:39.938 | 1.00th=[ 101], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.938 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.938 | 70.00th=[ 207], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.938 | 99.00th=[ 247], 99.50th=[ 288], 99.90th=[ 330], 99.95th=[ 330],
00:12:39.938 | 99.99th=[ 330]
00:12:39.938 bw ( KiB/s): min=17408, max=23040, per=3.33%, avg=20834.25, stdev=1850.45, samples=20
00:12:39.938 iops : min= 68, max= 90, avg=81.30, stdev= 7.24, samples=20
00:12:39.938 lat (msec) : 20=0.24%, 50=0.24%, 100=0.60%, 250=97.95%, 500=0.97%
00:12:39.938 cpu : usr=0.27%, sys=0.32%, ctx=837, majf=0, minf=1
00:12:39.938 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.938 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.938 issued rwts: total=0,829,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.938 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.938 job23: (groupid=0, jobs=1): err= 0: pid=72656: Thu Jul 25 10:13:11 2024
00:12:39.938 write: IOPS=81, BW=20.3MiB/s (21.3MB/s)(207MiB/10168msec); 0 zone resets
00:12:39.938 slat (usec): min=22, max=671, avg=45.66, stdev=29.43
00:12:39.938 clat (msec): min=18, max=330, avg=196.70, stdev=26.17
00:12:39.938 lat (msec): min=18, max=330, avg=196.74, stdev=26.17
00:12:39.938 clat percentiles (msec):
00:12:39.938 | 1.00th=[ 108], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.938 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.938 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.938 | 99.00th=[ 247], 99.50th=[ 288], 99.90th=[ 330], 99.95th=[ 330],
00:12:39.938 | 99.99th=[ 330]
00:12:39.938 bw ( KiB/s): min=17408, max=23040, per=3.32%, avg=20756.85, stdev=1927.11, samples=20
00:12:39.938 iops : min= 68, max= 90, avg=80.95, stdev= 7.47, samples=20
00:12:39.938 lat (msec) : 20=0.12%, 50=0.24%, 100=0.61%, 250=98.06%, 500=0.97%
00:12:39.938 cpu : usr=0.14%, sys=0.32%, ctx=850, majf=0, minf=1
00:12:39.938 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.938 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.938 issued rwts: total=0,826,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.938 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.938 job24: (groupid=0, jobs=1): err= 0: pid=72657: Thu Jul 25 10:13:11 2024
00:12:39.938 write: IOPS=81, BW=20.4MiB/s (21.4MB/s)(207MiB/10162msec); 0 zone resets
00:12:39.938 slat (usec): min=22, max=182, avg=48.04, stdev=17.06
00:12:39.938 clat (msec): min=20, max=323, avg=196.11, stdev=25.40
00:12:39.939 lat (msec): min=20, max=323, avg=196.16, stdev=25.40
00:12:39.939 clat percentiles (msec):
00:12:39.939 | 1.00th=[ 111], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 180],
00:12:39.939 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.939 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.939 | 99.00th=[ 247], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 326],
00:12:39.939 | 99.99th=[ 326]
00:12:39.939 bw ( KiB/s): min=17408, max=23040, per=3.33%, avg=20810.35, stdev=1824.38, samples=20
00:12:39.939 iops : min= 68, max= 90, avg=81.20, stdev= 7.02, samples=20
00:12:39.939 lat (msec) : 50=0.36%, 100=0.60%, 250=98.19%, 500=0.85%
00:12:39.939 cpu : usr=0.29%, sys=0.24%, ctx=847, majf=0, minf=1
00:12:39.939 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.939 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.939 issued rwts: total=0,828,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.939 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.939 job25: (groupid=0, jobs=1): err= 0: pid=72659: Thu Jul 25 10:13:11 2024
00:12:39.939 write: IOPS=81, BW=20.4MiB/s (21.3MB/s)(207MiB/10171msec); 0 zone resets
00:12:39.939 slat (usec): min=24, max=3862, avg=50.55, stdev=134.48
00:12:39.939 clat (msec): min=3, max=333, avg=196.20, stdev=27.56
00:12:39.939 lat (msec): min=7, max=333, avg=196.25, stdev=27.53
00:12:39.939 clat percentiles (msec):
00:12:39.939 | 1.00th=[ 89], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180],
00:12:39.939 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.939 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 232], 95.00th=[ 241],
00:12:39.939 | 99.00th=[ 247], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334],
00:12:39.939 | 99.99th=[ 334]
00:12:39.939 bw ( KiB/s): min=17408, max=23040, per=3.32%, avg=20806.65, stdev=1882.05, samples=20
00:12:39.939 iops : min= 68, max= 90, avg=81.15, stdev= 7.39, samples=20
00:12:39.939 lat (msec) : 4=0.12%, 20=0.12%, 50=0.36%, 100=0.48%, 250=97.95%
00:12:39.939 lat (msec) : 500=0.97%
00:12:39.939 cpu : usr=0.22%, sys=0.24%, ctx=863, majf=0, minf=1
00:12:39.939 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.939 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.939 issued rwts: total=0,828,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.939 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.939 job26: (groupid=0, jobs=1): err= 0: pid=72663: Thu Jul 25 10:13:11 2024
00:12:39.939 write: IOPS=81, BW=20.3MiB/s (21.3MB/s)(207MiB/10161msec); 0 zone resets
00:12:39.939 slat (usec): min=16, max=393, avg=46.49, stdev=20.31
00:12:39.939 clat (msec): min=22, max=323, avg=196.57, stdev=25.52
00:12:39.939 lat (msec): min=22, max=323, avg=196.62, stdev=25.52
00:12:39.939 clat percentiles (msec):
00:12:39.939 | 1.00th=[ 112], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 180],
00:12:39.939 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 205],
00:12:39.939 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.939 | 99.00th=[ 247], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 326],
00:12:39.939 | 99.99th=[ 326]
00:12:39.939 bw ( KiB/s): min=17408, max=22994, per=3.32%, avg=20759.15, stdev=1818.24, samples=20
00:12:39.939 iops : min= 68, max= 89, avg=81.00, stdev= 7.00, samples=20
00:12:39.939 lat (msec) : 50=0.36%, 100=0.48%, 250=98.31%, 500=0.85%
00:12:39.939 cpu : usr=0.23%, sys=0.27%, ctx=836, majf=0, minf=1
00:12:39.939 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.939 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.939 issued rwts: total=0,826,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.939 latency : target=0, window=0, percentile=100.00%, depth=16
00:12:39.939 job27: (groupid=0, jobs=1): err= 0: pid=72664: Thu Jul 25 10:13:11 2024
00:12:39.939 write: IOPS=81, BW=20.4MiB/s (21.3MB/s)(207MiB/10170msec); 0 zone resets
00:12:39.939 slat (usec): min=26, max=255, avg=52.97, stdev=21.18
00:12:39.939 clat (msec): min=7, max=333, avg=196.24, stdev=27.28
00:12:39.939 lat (msec): min=7, max=333, avg=196.29, stdev=27.28
00:12:39.939 clat percentiles (msec):
00:12:39.939 | 1.00th=[ 91], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 180],
00:12:39.939 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205],
00:12:39.939 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 230], 95.00th=[ 241],
00:12:39.939 | 99.00th=[ 247], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334],
00:12:39.939 | 99.99th=[ 334]
00:12:39.939 bw ( KiB/s): min=17408, max=23040, per=3.32%, avg=20806.70, stdev=1867.84, samples=20
00:12:39.939 iops : min= 68, max= 90, avg=81.15, stdev= 7.34, samples=20
00:12:39.939 lat (msec) : 10=0.12%, 20=0.12%, 50=0.36%, 100=0.48%, 250=97.95%
00:12:39.939 lat (msec) : 500=0.97%
00:12:39.939 cpu : usr=0.20%, sys=0.38%, ctx=859, majf=0, minf=1
00:12:39.939 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0%
00:12:39.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.939 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.939 issued rwts: total=0,828,0,0
short=0,0,0,0 dropped=0,0,0,0 00:12:39.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:12:39.939 job28: (groupid=0, jobs=1): err= 0: pid=72665: Thu Jul 25 10:13:11 2024 00:12:39.939 write: IOPS=81, BW=20.5MiB/s (21.5MB/s)(209MiB/10177msec); 0 zone resets 00:12:39.939 slat (usec): min=22, max=248, avg=48.44, stdev=20.26 00:12:39.939 clat (msec): min=2, max=333, avg=194.97, stdev=30.00 00:12:39.939 lat (msec): min=2, max=333, avg=195.01, stdev=30.00 00:12:39.939 clat percentiles (msec): 00:12:39.939 | 1.00th=[ 48], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 180], 00:12:39.939 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205], 00:12:39.939 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 241], 00:12:39.939 | 99.00th=[ 247], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:12:39.939 | 99.99th=[ 334] 00:12:39.939 bw ( KiB/s): min=17408, max=24576, per=3.35%, avg=20957.90, stdev=2008.71, samples=20 00:12:39.939 iops : min= 68, max= 96, avg=81.70, stdev= 7.83, samples=20 00:12:39.939 lat (msec) : 4=0.24%, 10=0.12%, 20=0.36%, 50=0.36%, 100=0.48% 00:12:39.939 lat (msec) : 250=97.48%, 500=0.96% 00:12:39.939 cpu : usr=0.25%, sys=0.29%, ctx=856, majf=0, minf=1 00:12:39.939 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0% 00:12:39.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.939 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.939 issued rwts: total=0,834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:12:39.939 job29: (groupid=0, jobs=1): err= 0: pid=72666: Thu Jul 25 10:13:11 2024 00:12:39.939 write: IOPS=81, BW=20.4MiB/s (21.3MB/s)(207MiB/10171msec); 0 zone resets 00:12:39.939 slat (usec): min=25, max=120, avg=53.21, stdev=14.02 00:12:39.939 clat (msec): min=8, max=333, avg=196.27, stdev=27.43 00:12:39.939 lat (msec): min=8, max=333, avg=196.32, stdev=27.43 00:12:39.939 clat 
percentiles (msec): 00:12:39.939 | 1.00th=[ 89], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 180], 00:12:39.939 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 205], 00:12:39.939 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 232], 95.00th=[ 241], 00:12:39.939 | 99.00th=[ 247], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:12:39.939 | 99.99th=[ 334] 00:12:39.939 bw ( KiB/s): min=17408, max=23040, per=3.32%, avg=20806.70, stdev=1933.18, samples=20 00:12:39.939 iops : min= 68, max= 90, avg=81.15, stdev= 7.60, samples=20 00:12:39.939 lat (msec) : 10=0.12%, 20=0.12%, 50=0.36%, 100=0.48%, 250=97.95% 00:12:39.939 lat (msec) : 500=0.97% 00:12:39.939 cpu : usr=0.21%, sys=0.39%, ctx=829, majf=0, minf=1 00:12:39.939 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=98.2%, 32=0.0%, >=64=0.0% 00:12:39.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.939 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.939 issued rwts: total=0,828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:12:39.939 00:12:39.939 Run status group 0 (all jobs): 00:12:39.939 WRITE: bw=611MiB/s (641MB/s), 20.3MiB/s-20.6MiB/s (21.3MB/s-21.6MB/s), io=6220MiB (6522MB), run=10159-10177msec 00:12:39.939 00:12:39.939 Disk stats (read/write): 00:12:39.939 sda: ios=13/817, merge=0/0, ticks=8/159132, in_queue=159140, util=94.16% 00:12:39.939 sdb: ios=48/827, merge=0/0, ticks=165/159535, in_queue=159700, util=95.53% 00:12:39.939 sdc: ios=38/821, merge=0/0, ticks=158/159415, in_queue=159574, util=95.64% 00:12:39.939 sdd: ios=21/815, merge=0/0, ticks=88/158803, in_queue=158890, util=94.74% 00:12:39.939 sde: ios=28/816, merge=0/0, ticks=139/159078, in_queue=159217, util=95.52% 00:12:39.939 sdf: ios=21/816, merge=0/0, ticks=108/159051, in_queue=159159, util=95.16% 00:12:39.939 sdg: ios=23/815, merge=0/0, ticks=120/158970, in_queue=159090, util=95.37% 00:12:39.939 sdh: ios=0/816, 
merge=0/0, ticks=0/159131, in_queue=159131, util=95.53% 00:12:39.939 sdi: ios=0/815, merge=0/0, ticks=0/159200, in_queue=159200, util=95.72% 00:12:39.939 sdj: ios=0/815, merge=0/0, ticks=0/158875, in_queue=158875, util=95.93% 00:12:39.939 sdk: ios=0/816, merge=0/0, ticks=0/159168, in_queue=159169, util=96.12% 00:12:39.939 sdl: ios=0/816, merge=0/0, ticks=0/159012, in_queue=159012, util=96.36% 00:12:39.939 sdm: ios=0/825, merge=0/0, ticks=0/159381, in_queue=159381, util=96.67% 00:12:39.939 sdn: ios=0/816, merge=0/0, ticks=0/159109, in_queue=159108, util=96.70% 00:12:39.939 sdo: ios=0/815, merge=0/0, ticks=0/159063, in_queue=159063, util=96.72% 00:12:39.940 sdp: ios=0/827, merge=0/0, ticks=0/159612, in_queue=159612, util=97.26% 00:12:39.940 sdq: ios=0/816, merge=0/0, ticks=0/159047, in_queue=159047, util=97.04% 00:12:39.940 sdr: ios=0/815, merge=0/0, ticks=0/158910, in_queue=158910, util=97.24% 00:12:39.940 sds: ios=0/816, merge=0/0, ticks=0/159206, in_queue=159205, util=97.40% 00:12:39.940 sdt: ios=0/816, merge=0/0, ticks=0/159124, in_queue=159124, util=97.68% 00:12:39.940 sdu: ios=0/814, merge=0/0, ticks=0/158998, in_queue=158997, util=97.62% 00:12:39.940 sdv: ios=0/817, merge=0/0, ticks=0/159212, in_queue=159213, util=97.99% 00:12:39.940 sdw: ios=0/817, merge=0/0, ticks=0/159275, in_queue=159274, util=97.99% 00:12:39.940 sdx: ios=0/814, merge=0/0, ticks=0/159138, in_queue=159139, util=98.08% 00:12:39.940 sdy: ios=0/815, merge=0/0, ticks=0/158991, in_queue=158991, util=98.03% 00:12:39.940 sdz: ios=0/817, merge=0/0, ticks=0/159231, in_queue=159231, util=98.22% 00:12:39.940 sdaa: ios=0/813, merge=0/0, ticks=0/158977, in_queue=158977, util=98.21% 00:12:39.940 sdab: ios=0/817, merge=0/0, ticks=0/159307, in_queue=159307, util=98.49% 00:12:39.940 sdac: ios=0/823, merge=0/0, ticks=0/159423, in_queue=159423, util=98.70% 00:12:39.940 sdad: ios=0/817, merge=0/0, ticks=0/159361, in_queue=159361, util=98.99% 00:12:39.940 [2024-07-25 10:13:11.556490] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 [2024-07-25 10:13:11.561191] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 [2024-07-25 10:13:11.568451] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 [2024-07-25 10:13:11.572215] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 [2024-07-25 10:13:11.578283] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 10:13:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@79 -- # sync 00:12:39.940 [2024-07-25 10:13:11.584397] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 [2024-07-25 10:13:11.588862] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 [2024-07-25 10:13:11.591634] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 10:13:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:12:39.940 10:13:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@83 -- # rm -f 00:12:39.940 [2024-07-25 10:13:11.596594] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 10:13:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@84 -- # iscsicleanup 00:12:39.940 [2024-07-25 10:13:11.598760] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 Cleaning up iSCSI connection 00:12:39.940 10:13:11 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:12:39.940 10:13:11 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:12:39.940 [2024-07-25 10:13:11.600974] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 [2024-07-25 10:13:11.604278] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 [2024-07-25 10:13:11.606617] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 [2024-07-25 10:13:11.608825] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 [2024-07-25 10:13:11.611080] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:39.940 Logging out of session [sid: 33, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 34, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 35, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 36, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 37, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 38, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 39, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 40, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 41, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 42, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 43, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 44, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 45, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 46, target: 
iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 47, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 48, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 49, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 50, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 51, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 52, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 53, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 54, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 55, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 56, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 57, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 58, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 59, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 60, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 61, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:12:39.940 Logging out of session [sid: 62, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:12:39.940 Logout of [sid: 33, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:12:39.940 Logout of [sid: 34, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 
00:12:39.940 Logout of [sid: 35, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:12:39.940 Logout of [sid: 36, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:12:39.940 Logout of [sid: 37, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:12:39.940 Logout of [sid: 38, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:12:39.940 Logout of [sid: 39, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:12:39.940 Logout of [sid: 40, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:12:39.940 Logout of [sid: 41, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:12:39.940 Logout of [sid: 42, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:12:39.940 Logout of [sid: 43, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:12:39.940 Logout of [sid: 44, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:12:39.940 Logout of [sid: 45, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 46, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 47, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 48, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 49, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 50, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 51, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 52, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 
00:12:39.941 Logout of [sid: 53, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 54, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 55, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 56, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 57, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 58, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 59, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 60, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 61, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:12:39.941 Logout of [sid: 62, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 
00:12:39.941 10:13:12 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:12:39.941 10:13:12 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@983 -- # rm -rf 00:12:39.941 10:13:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@85 -- # remove_backends 00:12:39.941 INFO: Removing lvol bdevs 00:12:39.941 10:13:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@22 -- # echo 'INFO: Removing lvol bdevs' 00:12:39.941 10:13:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # seq 1 30 00:12:39.941 10:13:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:39.941 10:13:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_1 00:12:39.941 10:13:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_1 00:12:39.941 [2024-07-25 10:13:12.750001] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (36ded4b7-b206-4ea3-b715-5b58a4001257) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:39.941 INFO: lvol bdev lvs0/lbd_1 removed 00:12:39.941 10:13:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_1 removed' 00:12:39.941 10:13:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:39.941 10:13:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_2 00:12:39.941 10:13:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_2 00:12:39.941 [2024-07-25 10:13:13.030094] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b3a75bb9-7785-4175-93b9-94047ab2f6db) 
received event(SPDK_BDEV_EVENT_REMOVE) 00:12:39.941 INFO: lvol bdev lvs0/lbd_2 removed 00:12:39.941 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_2 removed' 00:12:39.941 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:39.941 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_3 00:12:39.941 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_3 00:12:40.199 [2024-07-25 10:13:13.266166] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (181679de-c53c-4c41-b2f3-b55949fd038c) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:40.199 INFO: lvol bdev lvs0/lbd_3 removed 00:12:40.199 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_3 removed' 00:12:40.199 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:40.199 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_4 00:12:40.199 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_4 00:12:40.485 [2024-07-25 10:13:13.522241] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (1b99800c-75fc-4600-9e8f-0a7050598386) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:40.485 INFO: lvol bdev lvs0/lbd_4 removed 00:12:40.485 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_4 removed' 00:12:40.485 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:40.485 
10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_5 00:12:40.485 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_5 00:12:40.762 [2024-07-25 10:13:13.758467] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (ebce15a2-7adf-4d47-97e0-296f89aa2149) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:40.762 INFO: lvol bdev lvs0/lbd_5 removed 00:12:40.762 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_5 removed' 00:12:40.762 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:40.762 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_6 00:12:40.762 10:13:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_6 00:12:40.762 [2024-07-25 10:13:14.014486] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (3ed1ecfc-8fea-4fd9-a44b-dba0a39b4b84) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:41.021 INFO: lvol bdev lvs0/lbd_6 removed 00:12:41.021 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_6 removed' 00:12:41.021 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:41.021 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_7 00:12:41.021 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_7 00:12:41.021 [2024-07-25 10:13:14.190538] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name 
(b09ecda2-a795-4f67-b323-29f5bac50eb6) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:41.021 INFO: lvol bdev lvs0/lbd_7 removed 00:12:41.021 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_7 removed' 00:12:41.021 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:41.021 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_8 00:12:41.021 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_8 00:12:41.278 [2024-07-25 10:13:14.370589] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (d5e8a5dc-8cb2-4541-9860-545bde0b1856) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:41.278 INFO: lvol bdev lvs0/lbd_8 removed 00:12:41.278 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_8 removed' 00:12:41.278 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:41.278 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_9 00:12:41.278 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_9 00:12:41.536 [2024-07-25 10:13:14.554656] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (722768af-8822-46d5-8663-bbe434be7169) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:41.536 INFO: lvol bdev lvs0/lbd_9 removed 00:12:41.536 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_9 removed' 00:12:41.536 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:12:41.536 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_10 00:12:41.536 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_10 00:12:41.795 [2024-07-25 10:13:14.810798] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (4174ccb5-d08d-473a-b888-4e7fda8edadb) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:41.795 INFO: lvol bdev lvs0/lbd_10 removed 00:12:41.795 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_10 removed' 00:12:41.795 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:41.795 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_11 00:12:41.795 10:13:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_11 00:12:41.795 [2024-07-25 10:13:14.998856] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0ce0a2f5-ae9a-403f-b404-f406e0312aa4) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:41.795 INFO: lvol bdev lvs0/lbd_11 removed 00:12:41.795 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_11 removed' 00:12:41.795 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:41.795 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_12 00:12:41.795 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_12 00:12:42.053 [2024-07-25 10:13:15.194918] lun.c: 398:bdev_event_cb: 
*NOTICE*: bdev name (0ef9acba-c909-4bea-a453-e03a180d57ca) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:42.053 INFO: lvol bdev lvs0/lbd_12 removed 00:12:42.053 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_12 removed' 00:12:42.053 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:42.053 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_13 00:12:42.053 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_13 00:12:42.312 [2024-07-25 10:13:15.378979] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (6d224d8f-bd04-4cb9-8eaf-66c68b832f94) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:42.312 INFO: lvol bdev lvs0/lbd_13 removed 00:12:42.312 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_13 removed' 00:12:42.312 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:42.312 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_14 00:12:42.312 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_14 00:12:42.570 [2024-07-25 10:13:15.631061] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (345c5592-c558-4d58-a4d9-fe6e0c974397) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:42.570 INFO: lvol bdev lvs0/lbd_14 removed 00:12:42.570 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_14 removed' 00:12:42.570 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:42.570 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_15 00:12:42.570 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_15 00:12:42.828 [2024-07-25 10:13:15.895153] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (71363de9-2f48-431e-875c-51c216f2bd7e) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:42.828 INFO: lvol bdev lvs0/lbd_15 removed 00:12:42.828 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_15 removed' 00:12:42.828 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:42.828 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_16 00:12:42.828 10:13:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_16 00:12:43.086 [2024-07-25 10:13:16.107227] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (cb001cbd-7806-42a6-a9ce-ec8814f42991) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:43.086 INFO: lvol bdev lvs0/lbd_16 removed 00:12:43.086 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_16 removed' 00:12:43.086 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:43.086 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_17 00:12:43.086 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_17 00:12:43.344 
[2024-07-25 10:13:16.355325] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b5be91b7-92d2-49b1-ab08-1cca276f4e16) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:43.344 INFO: lvol bdev lvs0/lbd_17 removed 00:12:43.344 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_17 removed' 00:12:43.344 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:43.344 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_18 00:12:43.344 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_18 00:12:43.344 [2024-07-25 10:13:16.555405] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (697b28c8-775b-43fd-996d-d07671ccee3e) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:43.344 INFO: lvol bdev lvs0/lbd_18 removed 00:12:43.344 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_18 removed' 00:12:43.344 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:43.344 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_19 00:12:43.344 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_19 00:12:43.601 [2024-07-25 10:13:16.751494] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (d2becb7d-4f33-45d2-b46d-a86f15c0e186) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:43.601 INFO: lvol bdev lvs0/lbd_19 removed 00:12:43.601 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_19 removed' 00:12:43.601 10:13:16 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:43.601 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_20 00:12:43.601 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_20 00:12:43.889 [2024-07-25 10:13:16.951554] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (2d3d7636-fa4e-4da2-8386-4f1937164e47) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:43.889 INFO: lvol bdev lvs0/lbd_20 removed 00:12:43.889 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_20 removed' 00:12:43.889 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:43.889 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_21 00:12:43.889 10:13:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_21 00:12:44.177 [2024-07-25 10:13:17.191646] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (7b286e2f-60b9-400c-9268-a5dca2381761) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:44.177 INFO: lvol bdev lvs0/lbd_21 removed 00:12:44.177 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_21 removed' 00:12:44.177 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:44.177 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_22 00:12:44.177 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete lvs0/lbd_22 00:12:44.177 [2024-07-25 10:13:17.371699] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0d4f76c9-2310-421c-9591-3ba47edf4fef) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:44.177 INFO: lvol bdev lvs0/lbd_22 removed 00:12:44.177 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_22 removed' 00:12:44.177 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:44.177 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_23 00:12:44.177 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_23 00:12:44.435 [2024-07-25 10:13:17.559787] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (5e1cb099-3e6f-4f61-84b1-8df4ba4758d8) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:44.435 INFO: lvol bdev lvs0/lbd_23 removed 00:12:44.435 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_23 removed' 00:12:44.435 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:44.435 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_24 00:12:44.435 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_24 00:12:44.693 [2024-07-25 10:13:17.731845] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (ae212346-85f1-4dfb-8af9-fa09ae75de66) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:44.693 INFO: lvol bdev lvs0/lbd_24 removed 00:12:44.693 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_24 
removed' 00:12:44.693 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:44.693 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_25 00:12:44.693 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_25 00:12:44.693 [2024-07-25 10:13:17.903896] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (03c7d872-3211-4b96-948b-e63af869f020) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:44.693 INFO: lvol bdev lvs0/lbd_25 removed 00:12:44.693 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_25 removed' 00:12:44.693 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:44.693 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_26 00:12:44.693 10:13:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_26 00:12:44.951 [2024-07-25 10:13:18.079957] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (7b2a1e94-7768-43e0-95d5-c0f59f12b91a) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:44.951 INFO: lvol bdev lvs0/lbd_26 removed 00:12:44.951 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_26 removed' 00:12:44.951 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:44.951 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_27 00:12:44.951 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_27 00:12:45.208 [2024-07-25 10:13:18.260055] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (4de600fc-165d-4d53-b125-ff1856d7576b) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:45.208 INFO: lvol bdev lvs0/lbd_27 removed 00:12:45.208 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_27 removed' 00:12:45.208 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:45.208 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_28 00:12:45.208 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_28 00:12:45.208 [2024-07-25 10:13:18.456105] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (41e4bb56-e027-4ec2-af5a-a295c7756c44) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:45.466 INFO: lvol bdev lvs0/lbd_28 removed 00:12:45.466 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_28 removed' 00:12:45.466 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:45.466 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_29 00:12:45.466 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_29 00:12:45.466 [2024-07-25 10:13:18.696197] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (d40e3156-fb50-4dc8-9c0a-400833f664d0) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:45.466 INFO: lvol bdev lvs0/lbd_29 removed 00:12:45.466 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- 
# echo -e '\tINFO: lvol bdev lvs0/lbd_29 removed' 00:12:45.466 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:45.466 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_30 00:12:45.466 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_30 00:12:45.723 [2024-07-25 10:13:18.872254] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (5a97a231-2cd3-476e-a14e-9622706994a8) received event(SPDK_BDEV_EVENT_REMOVE) 00:12:45.723 INFO: lvol bdev lvs0/lbd_30 removed 00:12:45.723 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_30 removed' 00:12:45.723 10:13:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@28 -- # sleep 1 00:12:46.658 INFO: Removing lvol stores 00:12:46.658 10:13:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@30 -- # echo 'INFO: Removing lvol stores' 00:12:46.658 10:13:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:12:46.919 INFO: lvol store lvs0 removed 00:12:46.919 INFO: Removing NVMe 00:12:46.919 10:13:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@32 -- # echo 'INFO: lvol store lvs0 removed' 00:12:46.919 10:13:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@34 -- # echo 'INFO: Removing NVMe' 00:12:46.919 10:13:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:48.292 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@37 -- # return 0 00:12:48.292 10:13:21 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@86 -- # killprocess 70787 00:12:48.292 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 70787 ']' 00:12:48.292 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@952 -- # kill -0 70787 00:12:48.292 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@953 -- # uname 00:12:48.292 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:48.292 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70787 00:12:48.292 killing process with pid 70787 00:12:48.292 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:48.292 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:48.292 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70787' 00:12:48.292 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@967 -- # kill 70787 00:12:48.292 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@972 -- # wait 70787 00:12:48.859 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@87 -- # iscsitestfini 00:12:48.859 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:48.859 00:12:48.859 real 0m47.950s 00:12:48.859 user 0m57.432s 00:12:48.859 sys 0m14.133s 00:12:48.859 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:48.859 10:13:21 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:12:48.859 ************************************ 00:12:48.859 END TEST iscsi_tgt_multiconnection 00:12:48.859 ************************************ 00:12:48.859 
10:13:21 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:12:48.859 10:13:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@46 -- # '[' 0 -eq 1 ']' 00:12:48.859 10:13:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@49 -- # '[' 1 -eq 1 ']' 00:12:48.859 10:13:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@50 -- # hash ceph 00:12:48.859 10:13:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@54 -- # run_test iscsi_tgt_rbd /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd/rbd.sh 00:12:48.859 10:13:21 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:48.859 10:13:21 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:48.859 10:13:21 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:48.859 ************************************ 00:12:48.859 START TEST iscsi_tgt_rbd 00:12:48.859 ************************************ 00:12:48.859 10:13:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd/rbd.sh 00:12:48.859 * Looking for test storage... 
00:12:48.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@11 -- # iscsitestinit 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@13 -- # timing_enter rbd_setup 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@14 -- # rbd_setup 10.0.0.1 spdk_iscsi_ns 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1005 -- # '[' -z 10.0.0.1 ']' 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1009 -- # '[' -n spdk_iscsi_ns ']' 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1010 -- # grep spdk_iscsi_ns 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1010 -- # ip netns list 00:12:48.859 spdk_iscsi_ns (id: 0) 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1011 -- # NS_CMD='ip netns exec spdk_iscsi_ns' 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1021 
-- # RBD_NAME=foo 00:12:48.859 10:13:22 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1022 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:12:48.859 + base_dir=/var/tmp/ceph 00:12:48.859 + image=/var/tmp/ceph/ceph_raw.img 00:12:48.859 + dev=/dev/loop200 00:12:48.859 + pkill -9 ceph 00:12:48.859 + sleep 3 00:12:52.141 + umount /dev/loop200p2 00:12:52.141 umount: /dev/loop200p2: no mount point specified. 00:12:52.141 + losetup -d /dev/loop200 00:12:52.141 losetup: /dev/loop200: failed to use device: No such device 00:12:52.141 + rm -rf /var/tmp/ceph 00:12:52.141 10:13:25 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1023 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 10.0.0.1 00:12:52.141 + set -e 00:12:52.141 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:12:52.141 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:12:52.141 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:12:52.141 + base_dir=/var/tmp/ceph 00:12:52.141 + mon_ip=10.0.0.1 00:12:52.141 + mon_dir=/var/tmp/ceph/mon.a 00:12:52.141 + pid_dir=/var/tmp/ceph/pid 00:12:52.141 + ceph_conf=/var/tmp/ceph/ceph.conf 00:12:52.141 + mnt_dir=/var/tmp/ceph/mnt 00:12:52.141 + image=/var/tmp/ceph_raw.img 00:12:52.141 + dev=/dev/loop200 00:12:52.141 + modprobe loop 00:12:52.141 + umount /dev/loop200p2 00:12:52.141 umount: /dev/loop200p2: no mount point specified. 00:12:52.141 + true 00:12:52.141 + losetup -d /dev/loop200 00:12:52.141 losetup: /dev/loop200: failed to use device: No such device 00:12:52.141 + true 00:12:52.141 + '[' -d /var/tmp/ceph ']' 00:12:52.141 + mkdir /var/tmp/ceph 00:12:52.141 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:12:52.141 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:12:52.141 + fallocate -l 4G /var/tmp/ceph_raw.img 00:12:52.141 + mknod /dev/loop200 b 7 200 00:12:52.141 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:12:52.141 + PARTED='parted -s' 00:12:52.141 + SGDISK=sgdisk 00:12:52.141 Partitioning /dev/loop200 00:12:52.141 + echo 'Partitioning /dev/loop200' 00:12:52.141 + parted -s /dev/loop200 mktable gpt 00:12:52.141 + sleep 2 00:12:54.042 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:12:54.042 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:12:54.042 Setting name on /dev/loop200 00:12:54.042 + partno=0 00:12:54.042 + echo 'Setting name on /dev/loop200' 00:12:54.042 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:12:55.427 Warning: The kernel is still using the old partition table. 00:12:55.427 The new table will be used at the next reboot or after you 00:12:55.427 run partprobe(8) or kpartx(8) 00:12:55.427 The operation has completed successfully. 00:12:55.427 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:12:56.361 Warning: The kernel is still using the old partition table. 00:12:56.361 The new table will be used at the next reboot or after you 00:12:56.361 run partprobe(8) or kpartx(8) 00:12:56.361 The operation has completed successfully. 
00:12:56.361 + kpartx /dev/loop200 00:12:56.361 loop200p1 : 0 4192256 /dev/loop200 2048 00:12:56.361 loop200p2 : 0 4192256 /dev/loop200 4194304 00:12:56.361 ++ ceph -v 00:12:56.361 ++ awk '{print $3}' 00:12:56.361 + ceph_version=17.2.7 00:12:56.361 + ceph_maj=17 00:12:56.361 + '[' 17 -gt 12 ']' 00:12:56.361 + update_config=true 00:12:56.361 + rm -f /var/log/ceph/ceph-mon.a.log 00:12:56.361 + set_min_mon_release='--set-min-mon-release 14' 00:12:56.361 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:12:56.361 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:12:56.361 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:12:56.361 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:12:56.361 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:12:56.361 = sectsz=512 attr=2, projid32bit=1 00:12:56.361 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:56.361 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:56.361 data = bsize=4096 blocks=524032, imaxpct=25 00:12:56.361 = sunit=0 swidth=0 blks 00:12:56.361 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:56.361 log =internal log bsize=4096 blocks=16384, version=2 00:12:56.361 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:56.361 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:56.361 Discarding blocks...Done. 00:12:56.361 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:12:56.361 + cat 00:12:56.361 + rm -rf '/var/tmp/ceph/mon.a/*' 00:12:56.361 + mkdir -p /var/tmp/ceph/mon.a 00:12:56.361 + mkdir -p /var/tmp/ceph/pid 00:12:56.361 + rm -f /etc/ceph/ceph.client.admin.keyring 00:12:56.361 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:12:56.361 creating /var/tmp/ceph/keyring 00:12:56.361 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:12:56.361 + monmaptool --create --clobber --add a 10.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:12:56.361 monmaptool: monmap file /var/tmp/ceph/monmap 00:12:56.361 monmaptool: generated fsid 85054b9d-2146-4a80-8cec-e5eef780b56b 00:12:56.361 setting min_mon_release = octopus 00:12:56.361 epoch 0 00:12:56.361 fsid 85054b9d-2146-4a80-8cec-e5eef780b56b 00:12:56.361 last_changed 2024-07-25T10:13:29.598589+0000 00:12:56.361 created 2024-07-25T10:13:29.598589+0000 00:12:56.361 min_mon_release 15 (octopus) 00:12:56.361 election_strategy: 1 00:12:56.361 0: v2:10.0.0.1:12046/0 mon.a 00:12:56.361 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:12:56.361 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:12:56.618 + '[' true = true ']' 00:12:56.618 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:12:56.618 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:12:56.618 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:12:56.618 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:12:56.618 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:12:56.618 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:12:56.618 ++ hostname 00:12:56.618 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:12:56.618 + true 00:12:56.618 + '[' true = true ']' 00:12:56.618 + ceph-conf --name mon.a --show-config-value log_file 00:12:56.875 
/var/log/ceph/ceph-mon.a.log 00:12:56.875 ++ ceph -s 00:12:56.875 ++ grep id 00:12:56.875 ++ awk '{print $2}' 00:12:57.132 + fsid=85054b9d-2146-4a80-8cec-e5eef780b56b 00:12:57.132 + sed -i 's/perf = true/perf = true\n\tfsid = 85054b9d-2146-4a80-8cec-e5eef780b56b \n/g' /var/tmp/ceph/ceph.conf 00:12:57.132 + (( ceph_maj < 18 )) 00:12:57.132 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:12:57.132 + cat /var/tmp/ceph/ceph.conf 00:12:57.132 [global] 00:12:57.132 debug_lockdep = 0/0 00:12:57.132 debug_context = 0/0 00:12:57.132 debug_crush = 0/0 00:12:57.132 debug_buffer = 0/0 00:12:57.132 debug_timer = 0/0 00:12:57.132 debug_filer = 0/0 00:12:57.132 debug_objecter = 0/0 00:12:57.132 debug_rados = 0/0 00:12:57.132 debug_rbd = 0/0 00:12:57.132 debug_ms = 0/0 00:12:57.132 debug_monc = 0/0 00:12:57.132 debug_tp = 0/0 00:12:57.132 debug_auth = 0/0 00:12:57.132 debug_finisher = 0/0 00:12:57.132 debug_heartbeatmap = 0/0 00:12:57.132 debug_perfcounter = 0/0 00:12:57.132 debug_asok = 0/0 00:12:57.132 debug_throttle = 0/0 00:12:57.132 debug_mon = 0/0 00:12:57.132 debug_paxos = 0/0 00:12:57.132 debug_rgw = 0/0 00:12:57.132 00:12:57.132 perf = true 00:12:57.132 osd objectstore = filestore 00:12:57.132 00:12:57.132 fsid = 85054b9d-2146-4a80-8cec-e5eef780b56b 00:12:57.132 00:12:57.132 mutex_perf_counter = false 00:12:57.132 throttler_perf_counter = false 00:12:57.132 rbd cache = false 00:12:57.132 mon_allow_pool_delete = true 00:12:57.132 00:12:57.132 osd_pool_default_size = 1 00:12:57.132 00:12:57.132 [mon] 00:12:57.132 mon_max_pool_pg_num=166496 00:12:57.132 mon_osd_max_split_count = 10000 00:12:57.132 mon_pg_warn_max_per_osd = 10000 00:12:57.132 00:12:57.132 [osd] 00:12:57.132 osd_op_threads = 64 00:12:57.132 filestore_queue_max_ops=5000 00:12:57.132 filestore_queue_committing_max_ops=5000 00:12:57.132 journal_max_write_entries=1000 00:12:57.132 journal_queue_max_ops=3000 00:12:57.132 objecter_inflight_ops=102400 00:12:57.132 
filestore_wbthrottle_enable=false 00:12:57.132 filestore_queue_max_bytes=1048576000 00:12:57.132 filestore_queue_committing_max_bytes=1048576000 00:12:57.132 journal_max_write_bytes=1048576000 00:12:57.132 journal_queue_max_bytes=1048576000 00:12:57.132 ms_dispatch_throttle_bytes=1048576000 00:12:57.132 objecter_inflight_op_bytes=1048576000 00:12:57.132 filestore_max_sync_interval=10 00:12:57.133 osd_client_message_size_cap = 0 00:12:57.133 osd_client_message_cap = 0 00:12:57.133 osd_enable_op_tracker = false 00:12:57.133 filestore_fd_cache_size = 10240 00:12:57.133 filestore_fd_cache_shards = 64 00:12:57.133 filestore_op_threads = 16 00:12:57.133 osd_op_num_shards = 48 00:12:57.133 osd_op_num_threads_per_shard = 2 00:12:57.133 osd_pg_object_context_cache_count = 10240 00:12:57.133 filestore_odsync_write = True 00:12:57.133 journal_dynamic_throttle = True 00:12:57.133 00:12:57.133 [osd.0] 00:12:57.133 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:12:57.133 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:12:57.133 00:12:57.133 # add mon address 00:12:57.133 [mon.a] 00:12:57.133 mon addr = v2:10.0.0.1:12046 00:12:57.133 + i=0 00:12:57.133 + mkdir -p /var/tmp/ceph/mnt 00:12:57.133 ++ uuidgen 00:12:57.133 + uuid=762004a1-3a07-4511-99f0-e722f90fd160 00:12:57.133 + ceph -c /var/tmp/ceph/ceph.conf osd create 762004a1-3a07-4511-99f0-e722f90fd160 0 00:12:57.405 0 00:12:57.405 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid 762004a1-3a07-4511-99f0-e722f90fd160 --check-needs-journal --no-mon-config 00:12:57.405 2024-07-25T10:13:30.609+0000 7fcd64264400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:12:57.405 2024-07-25T10:13:30.609+0000 7fcd64264400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:12:57.405 2024-07-25T10:13:30.649+0000 7fcd64264400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected 762004a1-3a07-4511-99f0-e722f90fd160, invalid (someone else's?) journal 00:12:57.727 2024-07-25T10:13:30.672+0000 7fcd64264400 -1 journal do_read_entry(4096): bad header magic 00:12:57.727 2024-07-25T10:13:30.672+0000 7fcd64264400 -1 journal do_read_entry(4096): bad header magic 00:12:57.727 ++ hostname 00:12:57.727 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:12:58.661 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:12:58.920 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:12:59.178 added key for osd.0 00:12:59.178 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:12:59.435 + class_dir=/lib64/rados-classes 00:12:59.435 + [[ -e /lib64/rados-classes ]] 00:12:59.435 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:12:59.693 + pkill -9 ceph-osd 00:12:59.693 + true 00:12:59.693 + sleep 2 00:13:02.217 + mkdir -p /var/tmp/ceph/pid 00:13:02.217 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:13:02.217 2024-07-25T10:13:34.989+0000 7f7837284400 -1 Falling back to public interface 00:13:02.217 2024-07-25T10:13:35.030+0000 7f7837284400 -1 journal do_read_entry(8192): bad header magic 00:13:02.217 2024-07-25T10:13:35.030+0000 7f7837284400 -1 journal do_read_entry(8192): bad header magic 00:13:02.217 2024-07-25T10:13:35.068+0000 7f7837284400 -1 osd.0 0 log_to_monitors true 00:13:02.783 10:13:35 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1025 -- # ip netns exec spdk_iscsi_ns ceph osd pool create rbd 128 00:13:03.718 pool 'rbd' created 00:13:03.718 10:13:36 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@1026 -- # ip netns exec spdk_iscsi_ns rbd create foo --size 1000 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@15 -- # trap 'rbd_cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@16 -- # timing_exit rbd_setup 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@20 -- # timing_enter start_iscsi_tgt 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@23 -- # pid=74064 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@22 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@25 -- # trap 'killprocess $pid; rbd_cleanup; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@27 -- # waitforlisten 74064 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@829 -- # '[' -z 74064 ']' 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.984 10:13:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:08.984 [2024-07-25 10:13:42.225178] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:13:08.984 [2024-07-25 10:13:42.225294] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74064 ] 00:13:09.241 [2024-07-25 10:13:42.370311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.500 [2024-07-25 10:13:42.560098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.500 [2024-07-25 10:13:42.560265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.500 [2024-07-25 10:13:42.560463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.500 [2024-07-25 10:13:42.560627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.065 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:10.065 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@862 -- # return 0 00:13:10.065 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@28 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:13:10.065 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.065 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:10.065 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.065 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@29 -- # rpc_cmd framework_start_init 00:13:10.065 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.065 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@10 -- # set +x 00:13:10.321 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.321 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@30 -- # echo 'iscsi_tgt is listening. Running tests...' 00:13:10.321 iscsi_tgt is listening. Running tests... 00:13:10.321 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@32 -- # timing_exit start_iscsi_tgt 00:13:10.321 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:10.321 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@34 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@35 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@36 -- # rpc_cmd bdev_rbd_register_cluster iscsi_rbd_cluster --key-file /etc/ceph/ceph.client.admin.keyring --config-file /etc/ceph/ceph.conf 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@36 -- # rbd_cluster_name=iscsi_rbd_cluster 
00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@37 -- # rpc_cmd bdev_rbd_get_clusters_info -b iscsi_rbd_cluster 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:10.578 { 00:13:10.578 "cluster_name": "iscsi_rbd_cluster", 00:13:10.578 "config_file": "/etc/ceph/ceph.conf", 00:13:10.578 "key_file": "/etc/ceph/ceph.client.admin.keyring" 00:13:10.578 } 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@38 -- # rpc_cmd bdev_rbd_create rbd foo 4096 -c iscsi_rbd_cluster 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:10.578 [2024-07-25 10:13:43.677485] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@38 -- # rbd_bdev=Ceph0 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@39 -- # rpc_cmd bdev_get_bdevs 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:10.578 [ 00:13:10.578 { 00:13:10.578 "name": "Ceph0", 00:13:10.578 "aliases": [ 00:13:10.578 "614b8dd5-ffb9-416f-a394-5835612812c0" 00:13:10.578 ], 00:13:10.578 "product_name": "Ceph Rbd Disk", 00:13:10.578 "block_size": 4096, 00:13:10.578 "num_blocks": 256000, 00:13:10.578 "uuid": "614b8dd5-ffb9-416f-a394-5835612812c0", 00:13:10.578 "assigned_rate_limits": { 00:13:10.578 "rw_ios_per_sec": 0, 00:13:10.578 "rw_mbytes_per_sec": 0, 00:13:10.578 "r_mbytes_per_sec": 0, 00:13:10.578 "w_mbytes_per_sec": 0 
00:13:10.578 }, 00:13:10.578 "claimed": false, 00:13:10.578 "zoned": false, 00:13:10.578 "supported_io_types": { 00:13:10.578 "read": true, 00:13:10.578 "write": true, 00:13:10.578 "unmap": true, 00:13:10.578 "flush": true, 00:13:10.578 "reset": true, 00:13:10.578 "nvme_admin": false, 00:13:10.578 "nvme_io": false, 00:13:10.578 "nvme_io_md": false, 00:13:10.578 "write_zeroes": true, 00:13:10.578 "zcopy": false, 00:13:10.578 "get_zone_info": false, 00:13:10.578 "zone_management": false, 00:13:10.578 "zone_append": false, 00:13:10.578 "compare": false, 00:13:10.578 "compare_and_write": true, 00:13:10.578 "abort": false, 00:13:10.578 "seek_hole": false, 00:13:10.578 "seek_data": false, 00:13:10.578 "copy": false, 00:13:10.578 "nvme_iov_md": false 00:13:10.578 }, 00:13:10.578 "driver_specific": { 00:13:10.578 "rbd": { 00:13:10.578 "pool_name": "rbd", 00:13:10.578 "rbd_name": "foo", 00:13:10.578 "config_file": "/etc/ceph/ceph.conf", 00:13:10.578 "key_file": "/etc/ceph/ceph.client.admin.keyring" 00:13:10.578 } 00:13:10.578 } 00:13:10.578 } 00:13:10.578 ] 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@41 -- # rpc_cmd bdev_rbd_resize Ceph0 2000 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:10.578 true 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # rpc_cmd bdev_get_bdevs 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # sed 's/[^[:digit:]]//g' 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # grep num_blocks 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@10 -- # set +x 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # num_block=512000 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@44 -- # total_size=2000 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@45 -- # '[' 2000 '!=' 2000 ']' 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@53 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Ceph0:0 1:2 64 -d 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.578 10:13:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@54 -- # sleep 1 00:13:11.567 10:13:44 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@56 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:13:11.567 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:13:11.567 10:13:44 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@57 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:13:11.567 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:13:11.567 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:13:11.567 10:13:44 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@58 -- # waitforiscsidevices 1 00:13:11.567 10:13:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@116 -- # local num=1 00:13:11.567 10:13:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:13:11.567 10:13:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:13:11.825 10:13:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:13:11.825 10:13:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:13:11.825 [2024-07-25 10:13:44.831417] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:11.825 10:13:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # n=1 00:13:11.825 10:13:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:13:11.825 10:13:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@123 -- # return 0 00:13:11.825 10:13:44 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@60 -- # trap 'iscsicleanup; killprocess $pid; rbd_cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:11.825 10:13:44 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:13:11.825 [global] 00:13:11.825 thread=1 00:13:11.825 invalidate=1 00:13:11.825 rw=randrw 00:13:11.825 time_based=1 00:13:11.825 runtime=1 00:13:11.825 ioengine=libaio 00:13:11.825 direct=1 00:13:11.825 bs=4096 00:13:11.825 iodepth=1 00:13:11.825 norandommap=0 00:13:11.825 numjobs=1 00:13:11.825 00:13:11.825 verify_dump=1 00:13:11.825 verify_backlog=512 00:13:11.825 verify_state_save=0 00:13:11.825 do_verify=1 00:13:11.825 verify=crc32c-intel 00:13:11.825 [job0] 00:13:11.825 filename=/dev/sda 00:13:11.825 queue_depth set to 113 (sda) 00:13:11.825 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:11.825 fio-3.35 00:13:11.825 Starting 1 thread 00:13:11.825 
[2024-07-25 10:13:45.032725] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:13.198 [2024-07-25 10:13:46.454487] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:13.457 00:13:13.457 job0: (groupid=0, jobs=1): err= 0: pid=74184: Thu Jul 25 10:13:46 2024 00:13:13.457 read: IOPS=3, BW=15.3KiB/s (15.7kB/s)(20.0KiB/1308msec) 00:13:13.457 slat (nsec): min=12319, max=58455, avg=30692.00, stdev=22046.88 00:13:13.457 clat (usec): min=187, max=564, avg=332.61, stdev=152.94 00:13:13.457 lat (usec): min=200, max=623, avg=363.30, stdev=173.90 00:13:13.457 clat percentiles (usec): 00:13:13.457 | 1.00th=[ 188], 5.00th=[ 188], 10.00th=[ 188], 20.00th=[ 188], 00:13:13.457 | 30.00th=[ 223], 40.00th=[ 223], 50.00th=[ 289], 60.00th=[ 289], 00:13:13.457 | 70.00th=[ 400], 80.00th=[ 400], 90.00th=[ 562], 95.00th=[ 562], 00:13:13.457 | 99.00th=[ 562], 99.50th=[ 562], 99.90th=[ 562], 99.95th=[ 562], 00:13:13.457 | 99.99th=[ 562] 00:13:13.457 bw ( KiB/s): min= 40, max= 40, per=100.00%, avg=40.00, stdev= 0.00, samples=1 00:13:13.457 iops : min= 10, max= 10, avg=10.00, stdev= 0.00, samples=1 00:13:13.457 write: IOPS=2, BW=9394B/s (9394B/s)(12.0KiB/1308msec); 0 zone resets 00:13:13.457 slat (nsec): min=26745, max=39101, avg=32695.33, stdev=6190.57 00:13:13.457 clat (msec): min=128, max=969, avg=435.20, stdev=464.03 00:13:13.457 lat (msec): min=128, max=969, avg=435.23, stdev=464.03 00:13:13.457 clat percentiles (msec): 00:13:13.457 | 1.00th=[ 129], 5.00th=[ 129], 10.00th=[ 129], 20.00th=[ 129], 00:13:13.457 | 30.00th=[ 129], 40.00th=[ 207], 50.00th=[ 207], 60.00th=[ 207], 00:13:13.457 | 70.00th=[ 969], 80.00th=[ 969], 90.00th=[ 969], 95.00th=[ 969], 00:13:13.457 | 99.00th=[ 969], 99.50th=[ 969], 99.90th=[ 969], 99.95th=[ 969], 00:13:13.457 | 99.99th=[ 969] 00:13:13.457 bw ( KiB/s): min= 16, max= 16, per=100.00%, avg=16.00, stdev= 0.00, samples=1 00:13:13.457 iops : min= 4, max= 4, avg= 4.00, stdev= 0.00, samples=1 
00:13:13.457 lat (usec) : 250=25.00%, 500=25.00%, 750=12.50% 00:13:13.457 lat (msec) : 250=25.00%, 1000=12.50% 00:13:13.457 cpu : usr=0.00%, sys=0.00%, ctx=8, majf=0, minf=1 00:13:13.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.457 issued rwts: total=5,3,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:13.457 00:13:13.457 Run status group 0 (all jobs): 00:13:13.457 READ: bw=15.3KiB/s (15.7kB/s), 15.3KiB/s-15.3KiB/s (15.7kB/s-15.7kB/s), io=20.0KiB (20.5kB), run=1308-1308msec 00:13:13.457 WRITE: bw=9394B/s (9394B/s), 9394B/s-9394B/s (9394B/s-9394B/s), io=12.0KiB (12.3kB), run=1308-1308msec 00:13:13.457 00:13:13.457 Disk stats (read/write): 00:13:13.457 sda: ios=53/2, merge=0/0, ticks=12/336, in_queue=348, util=92.70% 00:13:13.457 10:13:46 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:13:13.457 [global] 00:13:13.457 thread=1 00:13:13.457 invalidate=1 00:13:13.457 rw=randrw 00:13:13.457 time_based=1 00:13:13.457 runtime=1 00:13:13.457 ioengine=libaio 00:13:13.457 direct=1 00:13:13.457 bs=131072 00:13:13.457 iodepth=32 00:13:13.457 norandommap=0 00:13:13.457 numjobs=1 00:13:13.457 00:13:13.457 verify_dump=1 00:13:13.457 verify_backlog=512 00:13:13.457 verify_state_save=0 00:13:13.457 do_verify=1 00:13:13.457 verify=crc32c-intel 00:13:13.457 [job0] 00:13:13.457 filename=/dev/sda 00:13:13.457 queue_depth set to 113 (sda) 00:13:13.457 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:13:13.457 fio-3.35 00:13:13.457 Starting 1 thread 00:13:13.457 [2024-07-25 10:13:46.683664] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD 
page 0xb9 00:13:15.363 [2024-07-25 10:13:48.457268] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.363 00:13:15.363 job0: (groupid=0, jobs=1): err= 0: pid=74234: Thu Jul 25 10:13:48 2024 00:13:15.363 read: IOPS=27, BW=3566KiB/s (3652kB/s)(5888KiB/1651msec) 00:13:15.363 slat (nsec): min=11447, max=76210, avg=29857.17, stdev=17040.16 00:13:15.363 clat (usec): min=273, max=123861, avg=4866.17, stdev=18054.85 00:13:15.363 lat (usec): min=296, max=123927, avg=4896.03, stdev=18059.50 00:13:15.363 clat percentiles (usec): 00:13:15.363 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 302], 20.00th=[ 515], 00:13:15.363 | 30.00th=[ 734], 40.00th=[ 1020], 50.00th=[ 1401], 60.00th=[ 1647], 00:13:15.363 | 70.00th=[ 2278], 80.00th=[ 5407], 90.00th=[ 5669], 95.00th=[ 5800], 00:13:15.363 | 99.00th=[124257], 99.50th=[124257], 99.90th=[124257], 99.95th=[124257], 00:13:15.363 | 99.99th=[124257] 00:13:15.363 bw ( KiB/s): min= 3328, max= 8448, per=100.00%, avg=5888.00, stdev=3620.39, samples=2 00:13:15.363 iops : min= 26, max= 66, avg=46.00, stdev=28.28, samples=2 00:13:15.363 write: IOPS=32, BW=4109KiB/s (4208kB/s)(6784KiB/1651msec); 0 zone resets 00:13:15.363 slat (usec): min=60, max=196, avg=95.01, stdev=33.15 00:13:15.363 clat (msec): min=104, max=1643, avg=979.14, stdev=458.96 00:13:15.363 lat (msec): min=104, max=1643, avg=979.24, stdev=458.96 00:13:15.363 clat percentiles (msec): 00:13:15.363 | 1.00th=[ 105], 5.00th=[ 142], 10.00th=[ 342], 20.00th=[ 693], 00:13:15.363 | 30.00th=[ 751], 40.00th=[ 827], 50.00th=[ 902], 60.00th=[ 953], 00:13:15.363 | 70.00th=[ 1301], 80.00th=[ 1502], 90.00th=[ 1636], 95.00th=[ 1636], 00:13:15.363 | 99.00th=[ 1636], 99.50th=[ 1636], 99.90th=[ 1636], 99.95th=[ 1636], 00:13:15.363 | 99.99th=[ 1636] 00:13:15.363 bw ( KiB/s): min= 255, max= 3584, per=45.68%, avg=1877.00, stdev=1666.13, samples=3 00:13:15.363 iops : min= 1, max= 28, avg=14.33, stdev=13.50, samples=3 00:13:15.363 lat (usec) : 500=9.09%, 750=5.05%, 
1000=4.04% 00:13:15.363 lat (msec) : 2=12.12%, 4=3.03%, 10=12.12%, 250=6.06%, 500=2.02% 00:13:15.363 lat (msec) : 750=9.09%, 1000=20.20%, 2000=17.17% 00:13:15.363 cpu : usr=0.42%, sys=0.00%, ctx=90, majf=0, minf=1 00:13:15.363 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=8.1%, 16=16.2%, 32=68.7%, >=64=0.0% 00:13:15.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.363 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=1.4%, 64=0.0%, >=64=0.0% 00:13:15.363 issued rwts: total=46,53,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.363 latency : target=0, window=0, percentile=100.00%, depth=32 00:13:15.363 00:13:15.363 Run status group 0 (all jobs): 00:13:15.363 READ: bw=3566KiB/s (3652kB/s), 3566KiB/s-3566KiB/s (3652kB/s-3652kB/s), io=5888KiB (6029kB), run=1651-1651msec 00:13:15.363 WRITE: bw=4109KiB/s (4208kB/s), 4109KiB/s-4109KiB/s (4208kB/s-4208kB/s), io=6784KiB (6947kB), run=1651-1651msec 00:13:15.363 00:13:15.363 Disk stats (read/write): 00:13:15.363 sda: ios=94/51, merge=0/0, ticks=234/40905, in_queue=41140, util=94.45% 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@65 -- # rm -f ./local-job0-0-verify.state 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@67 -- # trap - SIGINT SIGTERM EXIT 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@69 -- # iscsicleanup 00:13:15.363 Cleaning up iSCSI connection 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:13:15.363 Logging out of session [sid: 63, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:13:15.363 Logout of [sid: 63, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@983 -- # rm -rf 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@70 -- # rpc_cmd bdev_rbd_delete Ceph0 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:15.363 [2024-07-25 10:13:48.576560] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Ceph0) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@71 -- # rpc_cmd bdev_rbd_unregister_cluster iscsi_rbd_cluster 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@72 -- # killprocess 74064 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@948 -- # '[' -z 74064 ']' 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@952 -- # kill -0 74064 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@953 -- # uname 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.363 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74064 00:13:15.622 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:15.622 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:15.622 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 74064' 00:13:15.622 killing process with pid 74064 00:13:15.622 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@967 -- # kill 74064 00:13:15.622 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@972 -- # wait 74064 00:13:15.880 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@73 -- # rbd_cleanup 00:13:15.880 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:13:15.880 10:13:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:13:15.880 + base_dir=/var/tmp/ceph 00:13:15.880 + image=/var/tmp/ceph/ceph_raw.img 00:13:15.880 + dev=/dev/loop200 00:13:15.880 + pkill -9 ceph 00:13:15.880 + sleep 3 00:13:19.163 + umount /dev/loop200p2 00:13:19.163 umount: /dev/loop200p2: not mounted. 00:13:19.163 + losetup -d /dev/loop200 00:13:19.163 + rm -rf /var/tmp/ceph 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@75 -- # iscsitestfini 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:13:19.163 ************************************ 00:13:19.163 END TEST iscsi_tgt_rbd 00:13:19.163 ************************************ 00:13:19.163 00:13:19.163 real 0m30.144s 00:13:19.163 user 0m27.933s 00:13:19.163 sys 0m2.018s 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:13:19.163 10:13:52 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:13:19.163 10:13:52 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@57 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:13:19.163 10:13:52 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@59 -- # '[' 1 -eq 1 ']' 00:13:19.163 10:13:52 iscsi_tgt -- 
iscsi_tgt/iscsi_tgt.sh@60 -- # run_test iscsi_tgt_initiator /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator/initiator.sh 00:13:19.163 10:13:52 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:19.163 10:13:52 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.163 10:13:52 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:13:19.163 ************************************ 00:13:19.163 START TEST iscsi_tgt_initiator 00:13:19.163 ************************************ 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator/initiator.sh 00:13:19.163 * Looking for test storage... 00:13:19.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:13:19.163 10:13:52 
iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@11 -- # iscsitestinit 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@16 -- # timing_enter start_iscsi_tgt 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@19 -- # pid=74365 00:13:19.163 iSCSI target launched. 
pid: 74365 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@20 -- # echo 'iSCSI target launched. pid: 74365' 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@21 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@22 -- # waitforlisten 74365 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@829 -- # '[' -z 74365 ']' 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:13:19.163 10:13:52 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@18 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:13:19.163 [2024-07-25 10:13:52.287180] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:13:19.163 [2024-07-25 10:13:52.287302] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74365 ] 00:13:19.421 [2024-07-25 10:13:52.651811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.679 [2024-07-25 10:13:52.756979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.964 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.964 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@862 -- # return 0 00:13:19.964 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@23 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:13:19.964 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.964 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:13:19.964 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.964 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@24 -- # rpc_cmd framework_start_init 00:13:19.964 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.964 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:13:20.224 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.224 iscsi_tgt is listening. Running tests... 00:13:20.224 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@25 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:13:20.224 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@27 -- # timing_exit start_iscsi_tgt 00:13:20.224 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:20.224 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@29 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@30 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@31 -- # rpc_cmd bdev_malloc_create 64 512 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:13:20.225 Malloc0 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@36 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # 
set +x 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.225 10:13:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@37 -- # sleep 1 00:13:21.185 10:13:54 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@38 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:13:21.185 10:13:54 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 5 -s 512 00:13:21.185 10:13:54 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@40 -- # initiator_json_config 00:13:21.185 10:13:54 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:13:21.443 [2024-07-25 10:13:54.479224] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:13:21.443 [2024-07-25 10:13:54.479815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74409 ] 00:13:21.700 [2024-07-25 10:13:54.725075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.700 [2024-07-25 10:13:54.820999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.700 Running I/O for 5 seconds... 
00:13:26.964 00:13:26.964 Latency(us) 00:13:26.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.964 Job: iSCSI0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:26.964 Verification LBA range: start 0x0 length 0x4000 00:13:26.964 iSCSI0 : 5.00 17322.61 67.67 0.00 0.00 7357.58 787.99 10111.27 00:13:26.964 =================================================================================================================== 00:13:26.964 Total : 17322.61 67.67 0.00 0.00 7357.58 787.99 10111.27 00:13:26.964 10:14:00 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w unmap -t 5 -s 512 00:13:26.964 10:14:00 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@41 -- # initiator_json_config 00:13:26.964 10:14:00 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:13:26.964 [2024-07-25 10:14:00.145969] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:13:26.964 [2024-07-25 10:14:00.146054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74481 ] 00:13:27.221 [2024-07-25 10:14:00.395259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.479 [2024-07-25 10:14:00.491424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.479 Running I/O for 5 seconds... 
00:13:32.745 00:13:32.745 Latency(us) 00:13:32.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.745 Job: iSCSI0 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:13:32.745 iSCSI0 : 5.00 42282.67 165.17 0.00 0.00 3023.32 1505.77 6865.68 00:13:32.745 =================================================================================================================== 00:13:32.745 Total : 42282.67 165.17 0.00 0.00 3023.32 1505.77 6865.68 00:13:32.745 10:14:05 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w flush -t 5 -s 512 00:13:32.745 10:14:05 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@42 -- # initiator_json_config 00:13:32.745 10:14:05 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:13:32.745 [2024-07-25 10:14:05.829796] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:13:32.745 [2024-07-25 10:14:05.830140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74546 ] 00:13:33.003 [2024-07-25 10:14:06.078084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.003 [2024-07-25 10:14:06.165480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.003 Running I/O for 5 seconds... 
00:13:38.301 00:13:38.301 Latency(us) 00:13:38.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.301 Job: iSCSI0 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:13:38.301 iSCSI0 : 5.00 54998.74 214.84 0.00 0.00 2324.39 741.18 3666.90 00:13:38.301 =================================================================================================================== 00:13:38.301 Total : 54998.74 214.84 0.00 0.00 2324.39 741.18 3666.90 00:13:38.301 10:14:11 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w reset -t 10 -s 512 00:13:38.301 10:14:11 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@43 -- # initiator_json_config 00:13:38.301 10:14:11 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:13:38.301 [2024-07-25 10:14:11.489228] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:13:38.301 [2024-07-25 10:14:11.489776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74605 ] 00:13:38.559 [2024-07-25 10:14:11.741275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.817 [2024-07-25 10:14:11.838510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.817 Running I/O for 10 seconds... 
00:13:48.779 00:13:48.779 Latency(us) 00:13:48.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.779 Job: iSCSI0 (Core Mask 0x1, workload: reset, depth: 128, IO size: 4096) 00:13:48.779 Verification LBA range: start 0x0 length 0x4000 00:13:48.779 iSCSI0 : 10.01 16437.85 64.21 0.00 0.00 7755.82 1677.41 6616.02 00:13:48.779 =================================================================================================================== 00:13:48.779 Total : 16437.85 64.21 0.00 0.00 7755.82 1677.41 6616.02 00:13:49.037 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:49.037 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@47 -- # killprocess 74365 00:13:49.037 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@948 -- # '[' -z 74365 ']' 00:13:49.037 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@952 -- # kill -0 74365 00:13:49.037 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@953 -- # uname 00:13:49.037 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.037 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74365 00:13:49.037 killing process with pid 74365 00:13:49.037 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:49.037 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:49.037 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74365' 00:13:49.037 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@967 -- # kill 74365 00:13:49.037 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@972 -- # wait 74365 00:13:49.296 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@49 -- # 
iscsitestfini 00:13:49.296 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:13:49.296 00:13:49.296 real 0m30.388s 00:13:49.296 user 0m42.222s 00:13:49.296 sys 0m11.726s 00:13:49.296 ************************************ 00:13:49.296 END TEST iscsi_tgt_initiator 00:13:49.296 ************************************ 00:13:49.296 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.296 10:14:22 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:13:49.296 10:14:22 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:13:49.296 10:14:22 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@61 -- # run_test iscsi_tgt_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait/bdev_io_wait.sh 00:13:49.296 10:14:22 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:49.296 10:14:22 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.296 10:14:22 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:13:49.555 ************************************ 00:13:49.555 START TEST iscsi_tgt_bdev_io_wait 00:13:49.555 ************************************ 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait/bdev_io_wait.sh 00:13:49.555 * Looking for test storage... 
00:13:49.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@11 -- # iscsitestinit 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@16 -- # timing_enter start_iscsi_tgt 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:49.555 iSCSI target launched. pid: 74771 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@19 -- # pid=74771 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@20 -- # echo 'iSCSI target launched. 
pid: 74771' 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@21 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@22 -- # waitforlisten 74771 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 74771 ']' 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@18 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:49.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:49.555 10:14:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:49.555 [2024-07-25 10:14:22.721533] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:13:49.555 [2024-07-25 10:14:22.721619] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74771 ] 00:13:49.812 [2024-07-25 10:14:22.971307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.812 [2024-07-25 10:14:23.056067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@23 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@25 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@26 -- # rpc_cmd framework_start_init 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:50.744 iscsi_tgt is 
listening. Running tests... 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@27 -- # echo 'iscsi_tgt is listening. Running tests...' 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@29 -- # timing_exit start_iscsi_tgt 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@31 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@32 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@33 -- # rpc_cmd bdev_malloc_create 64 512 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.744 10:14:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:51.001 Malloc0 00:13:51.001 10:14:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:13:51.001 10:14:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@38 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:13:51.001 10:14:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.001 10:14:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:51.001 10:14:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.001 10:14:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@39 -- # sleep 1 00:13:51.960 10:14:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@40 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:13:51.960 10:14:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w write -t 1 00:13:51.960 10:14:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@42 -- # initiator_json_config 00:13:51.960 10:14:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:13:51.960 [2024-07-25 10:14:25.102087] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:13:51.960 [2024-07-25 10:14:25.102197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74821 ] 00:13:52.217 [2024-07-25 10:14:25.245861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.217 [2024-07-25 10:14:25.364661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.217 Running I/O for 1 seconds... 
00:13:53.588 00:13:53.588 Latency(us) 00:13:53.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.589 Job: iSCSI0 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:13:53.589 iSCSI0 : 1.00 33308.28 130.11 0.00 0.00 3833.78 1380.94 4805.97 00:13:53.589 =================================================================================================================== 00:13:53.589 Total : 33308.28 130.11 0.00 0.00 3833.78 1380.94 4805.97 00:13:53.589 10:14:26 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w read -t 1 00:13:53.589 10:14:26 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@43 -- # initiator_json_config 00:13:53.589 10:14:26 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:13:53.589 [2024-07-25 10:14:26.745868] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:13:53.589 [2024-07-25 10:14:26.746018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74836 ] 00:13:53.847 [2024-07-25 10:14:26.888522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.847 [2024-07-25 10:14:27.006921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.104 Running I/O for 1 seconds... 
00:13:55.038 00:13:55.038 Latency(us) 00:13:55.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.038 Job: iSCSI0 (Core Mask 0x1, workload: read, depth: 128, IO size: 4096) 00:13:55.038 iSCSI0 : 1.00 41942.02 163.84 0.00 0.00 3044.43 1154.68 3682.50 00:13:55.038 =================================================================================================================== 00:13:55.038 Total : 41942.02 163.84 0.00 0.00 3044.43 1154.68 3682.50 00:13:55.295 10:14:28 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w flush -t 1 00:13:55.295 10:14:28 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@44 -- # initiator_json_config 00:13:55.295 10:14:28 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:13:55.295 [2024-07-25 10:14:28.384047] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:13:55.295 [2024-07-25 10:14:28.384394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74862 ] 00:13:55.295 [2024-07-25 10:14:28.524785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.553 [2024-07-25 10:14:28.624972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.553 Running I/O for 1 seconds... 
00:13:56.489 00:13:56.489 Latency(us) 00:13:56.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.489 Job: iSCSI0 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:13:56.489 iSCSI0 : 1.00 49056.48 191.63 0.00 0.00 2603.75 647.56 2964.72 00:13:56.489 =================================================================================================================== 00:13:56.489 Total : 49056.48 191.63 0.00 0.00 2603.75 647.56 2964.72 00:13:56.748 10:14:29 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w unmap -t 1 00:13:56.748 10:14:29 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@45 -- # initiator_json_config 00:13:56.748 10:14:29 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:13:56.748 [2024-07-25 10:14:30.004315] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:13:56.748 [2024-07-25 10:14:30.004444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74877 ] 00:13:57.134 [2024-07-25 10:14:30.154243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.134 [2024-07-25 10:14:30.282245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.392 Running I/O for 1 seconds... 
00:13:58.325 00:13:58.325 Latency(us) 00:13:58.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.325 Job: iSCSI0 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:13:58.325 iSCSI0 : 1.00 34571.29 135.04 0.00 0.00 3694.10 1435.55 4837.18 00:13:58.325 =================================================================================================================== 00:13:58.325 Total : 34571.29 135.04 0.00 0.00 3694.10 1435.55 4837.18 00:13:58.583 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:13:58.583 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@49 -- # killprocess 74771 00:13:58.583 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 74771 ']' 00:13:58.583 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 74771 00:13:58.583 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:13:58.583 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.583 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74771 00:13:58.583 killing process with pid 74771 00:13:58.583 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:58.583 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:58.583 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74771' 00:13:58.583 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 74771 00:13:58.583 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 74771 00:13:58.842 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@51 -- # iscsitestfini 
00:13:58.842 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:13:58.842 00:13:58.842 real 0m9.384s 00:13:58.842 user 0m12.574s 00:13:58.842 sys 0m2.979s 00:13:58.842 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:58.842 ************************************ 00:13:58.842 END TEST iscsi_tgt_bdev_io_wait 00:13:58.842 ************************************ 00:13:58.842 10:14:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:58.842 10:14:31 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:13:58.842 10:14:31 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@62 -- # run_test iscsi_tgt_resize /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize/resize.sh 00:13:58.842 10:14:31 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:58.842 10:14:31 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.842 10:14:31 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:13:58.842 ************************************ 00:13:58.842 START TEST iscsi_tgt_resize 00:13:58.842 ************************************ 00:13:58.842 10:14:31 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize/resize.sh 00:13:58.842 * Looking for test storage... 
00:13:58.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 
00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@12 -- # iscsitestinit 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@14 -- # BDEV_SIZE=64 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@15 -- # BDEV_NEW_SIZE=128 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@16 -- # BLOCK_SIZE=512 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@17 -- # RESIZE_SOCK=/var/tmp/spdk-resize.sock 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@19 -- # timing_enter start_iscsi_tgt 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@22 -- # rm -f /var/tmp/spdk-resize.sock 00:13:58.842 iSCSI target launched. pid: 74956 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@25 -- # pid=74956 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@26 -- # echo 'iSCSI target launched. 
pid: 74956' 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@27 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@28 -- # waitforlisten 74956 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@829 -- # '[' -z 74956 ']' 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:13:58.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.842 10:14:32 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:13:59.100 [2024-07-25 10:14:32.176499] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:13:59.100 [2024-07-25 10:14:32.176644] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74956 ] 00:13:59.357 [2024-07-25 10:14:32.446554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.357 [2024-07-25 10:14:32.543315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.925 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.925 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@862 -- # return 0 00:13:59.925 10:14:33 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@29 -- # rpc_cmd framework_start_init 00:13:59.925 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.925 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:13:59.925 iscsi_tgt is listening. Running tests... 00:13:59.925 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.925 10:14:33 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@30 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:13:59.925 10:14:33 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@32 -- # timing_exit start_iscsi_tgt 00:13:59.925 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:59.925 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@34 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@35 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@36 -- # rpc_cmd bdev_null_create Null0 64 512 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:14:00.184 Null0 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@41 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Null0:0 1:2 256 -d 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.184 10:14:33 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@42 -- # sleep 1 00:14:01.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-resize.sock... 00:14:01.197 10:14:34 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@43 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:14:01.197 10:14:34 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@47 -- # bdevperf_pid=74999 00:14:01.197 10:14:34 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@48 -- # waitforlisten 74999 /var/tmp/spdk-resize.sock 00:14:01.197 10:14:34 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@829 -- # '[' -z 74999 ']' 00:14:01.197 10:14:34 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-resize.sock 00:14:01.197 10:14:34 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.197 10:14:34 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-resize.sock --json /dev/fd/63 -q 16 -o 4096 -w read -t 5 -R -s 128 -z 00:14:01.197 10:14:34 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-resize.sock...' 00:14:01.197 10:14:34 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@46 -- # initiator_json_config 00:14:01.197 10:14:34 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.197 10:14:34 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@139 -- # jq . 00:14:01.197 10:14:34 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:14:01.197 [2024-07-25 10:14:34.311382] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:14:01.198 [2024-07-25 10:14:34.311500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 128 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74999 ] 00:14:01.455 [2024-07-25 10:14:34.474468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.455 [2024-07-25 10:14:34.562417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@862 -- # return 0 00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@50 -- # rpc_cmd bdev_null_resize Null0 128 00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:14:02.023 [2024-07-25 10:14:35.201183] lun.c: 402:bdev_event_cb: *NOTICE*: bdev name (Null0) received event(SPDK_BDEV_EVENT_RESIZE) 00:14:02.023 true 00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # rpc_cmd -s /var/tmp/spdk-resize.sock bdev_get_bdevs 00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # jq '.[].num_blocks' 00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # num_block=131072 00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@54 -- # total_size=64 
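The size checks in this test hinge on a simple conversion: `bdev_get_bdevs` reports `num_blocks`, and `resize.sh` derives the size in MiB using the 512-byte `BLOCK_SIZE`. A minimal shell sketch of that arithmetic, with values taken from this log (the `to_mib` helper is illustrative, not part of the SPDK test scripts):

```shell
#!/usr/bin/env bash
# num_blocks -> MiB, as resize.sh computes total_size from bdev_get_bdevs.
BLOCK_SIZE=512

to_mib() { echo $(( $1 * BLOCK_SIZE / 1024 / 1024 )); }

to_mib 131072   # before bdev_null_resize: 64  (BDEV_SIZE=64)
to_mib 262144   # after  bdev_null_resize: 128 (BDEV_NEW_SIZE=128)

# Cross-check the bdevperf summary: MiB/s = IOPS * IO size / 2^20.
# 41811.73 IOPS of 4 KiB reads is ~163.33 MiB/s, matching the report.
awk 'BEGIN { printf "%.2f\n", 41811.73 * 4096 / 1048576 }'
```

Note the initiator only observes the new size after the resize event propagates through the LUN's bdev event callback, which is why the script re-queries `bdev_get_bdevs` over the bdevperf RPC socket rather than the target's.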
00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@55 -- # '[' 64 '!=' 64 ']' 00:14:02.023 10:14:35 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@59 -- # sleep 2 00:14:04.550 10:14:37 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@61 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-resize.sock perform_tests 00:14:04.550 Running I/O for 5 seconds... 00:14:09.838 00:14:09.838 Latency(us) 00:14:09.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.838 Job: iSCSI0 (Core Mask 0x1, workload: read, depth: 16, IO size: 4096) 00:14:09.838 iSCSI0 : 5.00 41811.73 163.33 0.00 0.00 379.76 190.17 1810.04 00:14:09.838 =================================================================================================================== 00:14:09.838 Total : 41811.73 163.33 0.00 0.00 379.76 190.17 1810.04 00:14:09.838 0 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # jq '.[].num_blocks' 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # rpc_cmd -s /var/tmp/spdk-resize.sock bdev_get_bdevs 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # num_block=262144 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@65 -- # total_size=128 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@66 -- # '[' 128 '!=' 128 ']' 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@72 -- # killprocess 74999 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@948 -- # '[' -z 
74999 ']' 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@952 -- # kill -0 74999 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # uname 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74999 00:14:09.838 killing process with pid 74999 00:14:09.838 Received shutdown signal, test time was about 5.000000 seconds 00:14:09.838 00:14:09.838 Latency(us) 00:14:09.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.838 =================================================================================================================== 00:14:09.838 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74999' 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@967 -- # kill 74999 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@972 -- # wait 74999 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@73 -- # killprocess 74956 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@948 -- # '[' -z 74956 ']' 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@952 -- # kill -0 74956 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # uname 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 74956 00:14:09.838 killing process with pid 74956 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74956' 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@967 -- # kill 74956 00:14:09.838 10:14:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@972 -- # wait 74956 00:14:09.838 10:14:43 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@75 -- # iscsitestfini 00:14:09.838 10:14:43 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:14:09.838 00:14:09.838 real 0m11.064s 00:14:09.838 user 0m16.336s 00:14:09.838 sys 0m3.363s 00:14:09.838 ************************************ 00:14:09.838 END TEST iscsi_tgt_resize 00:14:09.838 ************************************ 00:14:09.838 10:14:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:09.838 10:14:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:14:10.097 10:14:43 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:14:10.097 10:14:43 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@65 -- # cleanup_veth_interfaces 00:14:10.097 10:14:43 iscsi_tgt -- iscsi_tgt/common.sh@95 -- # ip link set init_br nomaster 00:14:10.097 10:14:43 iscsi_tgt -- iscsi_tgt/common.sh@96 -- # ip link set tgt_br nomaster 00:14:10.097 10:14:43 iscsi_tgt -- iscsi_tgt/common.sh@97 -- # ip link set tgt_br2 nomaster 00:14:10.097 10:14:43 iscsi_tgt -- iscsi_tgt/common.sh@98 -- # ip link set init_br down 00:14:10.097 10:14:43 iscsi_tgt -- iscsi_tgt/common.sh@99 -- # ip link set tgt_br down 00:14:10.097 10:14:43 iscsi_tgt -- iscsi_tgt/common.sh@100 -- # ip link set tgt_br2 down 00:14:10.097 10:14:43 iscsi_tgt -- 
iscsi_tgt/common.sh@101 -- # ip link delete iscsi_br type bridge 00:14:10.097 10:14:43 iscsi_tgt -- iscsi_tgt/common.sh@102 -- # ip link delete spdk_init_int 00:14:10.097 10:14:43 iscsi_tgt -- iscsi_tgt/common.sh@103 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:14:10.097 10:14:43 iscsi_tgt -- iscsi_tgt/common.sh@104 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:14:10.097 10:14:43 iscsi_tgt -- iscsi_tgt/common.sh@105 -- # ip netns del spdk_iscsi_ns 00:14:10.097 10:14:43 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:14:10.097 ************************************ 00:14:10.097 END TEST iscsi_tgt 00:14:10.097 ************************************ 00:14:10.097 00:14:10.097 real 7m18.054s 00:14:10.097 user 13m15.802s 00:14:10.097 sys 1m50.267s 00:14:10.097 10:14:43 iscsi_tgt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:10.097 10:14:43 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:14:10.098 10:14:43 -- common/autotest_common.sh@1142 -- # return 0 00:14:10.098 10:14:43 -- spdk/autotest.sh@264 -- # run_test spdkcli_iscsi /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:14:10.098 10:14:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:10.098 10:14:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.098 10:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:10.098 ************************************ 00:14:10.098 START TEST spdkcli_iscsi 00:14:10.098 ************************************ 00:14:10.098 10:14:43 spdkcli_iscsi -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:14:10.366 * Looking for test storage... 
00:14:10.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:14:10.366 10:14:43 spdkcli_iscsi -- spdkcli/iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:14:10.366 10:14:43 spdkcli_iscsi -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:14:10.366 10:14:43 spdkcli_iscsi -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:14:10.366 10:14:43 spdkcli_iscsi -- spdkcli/iscsi.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 
00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:14:10.366 10:14:43 spdkcli_iscsi -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:14:10.366 10:14:43 spdkcli_iscsi -- spdkcli/iscsi.sh@12 -- # MATCH_FILE=spdkcli_iscsi.test 00:14:10.366 10:14:43 spdkcli_iscsi -- spdkcli/iscsi.sh@13 -- # SPDKCLI_BRANCH=/iscsi 00:14:10.366 10:14:43 spdkcli_iscsi -- spdkcli/iscsi.sh@15 -- # trap cleanup EXIT 00:14:10.366 10:14:43 spdkcli_iscsi -- spdkcli/iscsi.sh@17 -- # timing_enter run_iscsi_tgt 00:14:10.366 10:14:43 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:10.366 10:14:43 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:14:10.366 10:14:43 spdkcli_iscsi -- spdkcli/iscsi.sh@21 -- # iscsi_tgt_pid=75209 00:14:10.366 10:14:43 spdkcli_iscsi -- spdkcli/iscsi.sh@22 -- # waitforlisten 75209 00:14:10.366 10:14:43 spdkcli_iscsi -- common/autotest_common.sh@829 -- # '[' -z 75209 ']' 00:14:10.366 10:14:43 spdkcli_iscsi -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.366 10:14:43 spdkcli_iscsi -- spdkcli/iscsi.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x3 -p 0 --wait-for-rpc 00:14:10.366 10:14:43 spdkcli_iscsi -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.366 10:14:43 spdkcli_iscsi -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
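This spdkcli test launches `iscsi_tgt` with `-m 0x3`, a hex core mask selecting cores 0 and 1, consistent with the EAL notice "Total cores available: 2" and the two reactor start messages in this log. A hypothetical shell sketch decoding such a mask (the `decode_mask` helper is illustrative, not an SPDK script):

```shell
#!/usr/bin/env bash
# Decode a hex CPU core mask (as passed to iscsi_tgt -m) into core IDs:
# bit i set in the mask means core i runs a reactor.
decode_mask() {
  local mask=$(( $1 )) i cores=()
  for (( i = 0; i < 64; i++ )); do
    (( (mask >> i) & 1 )) && cores+=("$i")
  done
  echo "${cores[@]}"
}

decode_mask 0x3   # cores 0 1      (this spdkcli run)
decode_mask 0x2   # core 1         (the resize target above)
decode_mask 0xF   # cores 0 1 2 3  (ISCSI_TEST_CORE_MASK)
```

The same decoding explains the earlier resize target: `-m 0x2` sets only bit 1, hence the single "Reactor started on core 1" line in that run.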
00:14:10.366 10:14:43 spdkcli_iscsi -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.366 10:14:43 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:14:10.366 [2024-07-25 10:14:43.491177] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:14:10.366 [2024-07-25 10:14:43.491290] [ DPDK EAL parameters: iscsi --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75209 ] 00:14:10.624 [2024-07-25 10:14:43.629541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:10.624 [2024-07-25 10:14:43.737203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.624 [2024-07-25 10:14:43.737210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.190 10:14:44 spdkcli_iscsi -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.190 10:14:44 spdkcli_iscsi -- common/autotest_common.sh@862 -- # return 0 00:14:11.190 10:14:44 spdkcli_iscsi -- spdkcli/iscsi.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:11.755 10:14:44 spdkcli_iscsi -- spdkcli/iscsi.sh@25 -- # timing_exit run_iscsi_tgt 00:14:11.755 10:14:44 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.755 10:14:44 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:14:11.755 10:14:44 spdkcli_iscsi -- spdkcli/iscsi.sh@27 -- # timing_enter spdkcli_create_iscsi_config 00:14:11.755 10:14:44 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:11.755 10:14:44 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:14:11.755 10:14:44 spdkcli_iscsi -- spdkcli/iscsi.sh@48 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc0'\'' '\''Malloc0'\'' True 00:14:11.755 '\''/bdevs/malloc create 32 512 Malloc1'\'' 
'\''Malloc1'\'' True 00:14:11.755 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:14:11.755 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:14:11.755 '\''/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"'\'' '\''host=127.0.0.1, port=3261'\'' True 00:14:11.755 '\''/iscsi/portal_groups create 2 127.0.0.1:3262'\'' '\''host=127.0.0.1, port=3262'\'' True 00:14:11.755 '\''/iscsi/initiator_groups create 2 ANY 10.0.2.15/32'\'' '\''hostname=ANY, netmask=10.0.2.15/32'\'' True 00:14:11.755 '\''/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32'\'' '\''hostname=ANZ, netmask=10.0.2.15/32'\'' True 00:14:11.755 '\''/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32'\'' '\''hostname=ANW, netmask=10.0.2.16'\'' True 00:14:11.755 '\''/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1'\'' '\''Target0'\'' True 00:14:11.755 '\''/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1'\'' '\''Target1'\'' True 00:14:11.755 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' True 00:14:11.755 '\''/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2'\'' '\''Malloc3'\'' True 00:14:11.755 '\''/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"'\'' '\''user=test3'\'' True 00:14:11.755 '\''/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2'\'' '\''user=test2'\'' True 00:14:11.755 '\''/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"'\'' '\''user=test4'\'' True 00:14:11.755 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true'\'' '\''disable_chap: True'\'' True 00:14:11.755 '\''/iscsi/global_params set_auth g=1 d=true r=false'\'' '\''disable_chap: True'\'' True 00:14:11.755 
'\''/iscsi ls'\'' '\''Malloc'\'' True 00:14:11.755 ' 00:14:19.875 Executing command: ['/bdevs/malloc create 32 512 Malloc0', 'Malloc0', True] 00:14:19.875 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:14:19.875 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:14:19.875 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:14:19.875 Executing command: ['/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"', 'host=127.0.0.1, port=3261', True] 00:14:19.876 Executing command: ['/iscsi/portal_groups create 2 127.0.0.1:3262', 'host=127.0.0.1, port=3262', True] 00:14:19.876 Executing command: ['/iscsi/initiator_groups create 2 ANY 10.0.2.15/32', 'hostname=ANY, netmask=10.0.2.15/32', True] 00:14:19.876 Executing command: ['/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32', 'hostname=ANZ, netmask=10.0.2.15/32', True] 00:14:19.876 Executing command: ['/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32', 'hostname=ANW, netmask=10.0.2.16', True] 00:14:19.876 Executing command: ['/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1', 'Target0', True] 00:14:19.876 Executing command: ['/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1', 'Target1', True] 00:14:19.876 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', True] 00:14:19.876 Executing command: ['/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2', 'Malloc3', True] 00:14:19.876 Executing command: ['/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"', 'user=test3', True] 00:14:19.876 Executing command: ['/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2', 'user=test2', True] 00:14:19.876 Executing command: 
['/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"', 'user=test4', True] 00:14:19.876 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true', 'disable_chap: True', True] 00:14:19.876 Executing command: ['/iscsi/global_params set_auth g=1 d=true r=false', 'disable_chap: True', True] 00:14:19.876 Executing command: ['/iscsi ls', 'Malloc', True] 00:14:19.876 10:14:52 spdkcli_iscsi -- spdkcli/iscsi.sh@49 -- # timing_exit spdkcli_create_iscsi_config 00:14:19.876 10:14:52 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:19.876 10:14:52 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:14:19.876 10:14:52 spdkcli_iscsi -- spdkcli/iscsi.sh@51 -- # timing_enter spdkcli_check_match 00:14:19.876 10:14:52 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:19.876 10:14:52 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:14:19.876 10:14:52 spdkcli_iscsi -- spdkcli/iscsi.sh@52 -- # check_match 00:14:19.876 10:14:52 spdkcli_iscsi -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /iscsi 00:14:19.876 10:14:52 spdkcli_iscsi -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test.match 00:14:19.876 10:14:52 spdkcli_iscsi -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test 00:14:19.876 10:14:52 spdkcli_iscsi -- spdkcli/iscsi.sh@53 -- # timing_exit spdkcli_check_match 00:14:19.876 10:14:52 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:19.876 10:14:52 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:14:19.876 10:14:52 spdkcli_iscsi -- spdkcli/iscsi.sh@55 -- # timing_enter spdkcli_clear_iscsi_config 00:14:19.876 10:14:52 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:19.876 10:14:52 spdkcli_iscsi -- 
common/autotest_common.sh@10 -- # set +x 00:14:19.876 10:14:52 spdkcli_iscsi -- spdkcli/iscsi.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/iscsi/auth_groups delete_secret 1 test2'\'' '\''user=test2'\'' 00:14:19.876 '\''/iscsi/auth_groups delete_secret_all 1'\'' '\''user=test1'\'' 00:14:19.876 '\''/iscsi/auth_groups delete 1'\'' '\''user=test1'\'' 00:14:19.876 '\''/iscsi/auth_groups delete_all'\'' '\''user=test4'\'' 00:14:19.876 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' 00:14:19.876 '\''/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1'\'' '\''Target1'\'' 00:14:19.876 '\''/iscsi/target_nodes delete_all'\'' '\''Target0'\'' 00:14:19.876 '\''/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32'\'' '\''ANW'\'' 00:14:19.876 '\''/iscsi/initiator_groups delete 3'\'' '\''ANZ'\'' 00:14:19.876 '\''/iscsi/initiator_groups delete_all'\'' '\''ANY'\'' 00:14:19.876 '\''/iscsi/portal_groups delete 1'\'' '\''127.0.0.1:3261'\'' 00:14:19.876 '\''/iscsi/portal_groups delete_all'\'' '\''127.0.0.1:3262'\'' 00:14:19.876 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:14:19.876 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:14:19.876 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:14:19.876 '\''/bdevs/malloc delete Malloc0'\'' '\''Malloc0'\'' 00:14:19.876 ' 00:14:26.481 Executing command: ['/iscsi/auth_groups delete_secret 1 test2', 'user=test2', False] 00:14:26.481 Executing command: ['/iscsi/auth_groups delete_secret_all 1', 'user=test1', False] 00:14:26.481 Executing command: ['/iscsi/auth_groups delete 1', 'user=test1', False] 00:14:26.481 Executing command: ['/iscsi/auth_groups delete_all', 'user=test4', False] 00:14:26.481 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', False] 00:14:26.481 Executing command: 
['/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1', 'Target1', False] 00:14:26.481 Executing command: ['/iscsi/target_nodes delete_all', 'Target0', False] 00:14:26.481 Executing command: ['/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32', 'ANW', False] 00:14:26.481 Executing command: ['/iscsi/initiator_groups delete 3', 'ANZ', False] 00:14:26.481 Executing command: ['/iscsi/initiator_groups delete_all', 'ANY', False] 00:14:26.481 Executing command: ['/iscsi/portal_groups delete 1', '127.0.0.1:3261', False] 00:14:26.481 Executing command: ['/iscsi/portal_groups delete_all', '127.0.0.1:3262', False] 00:14:26.481 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:14:26.481 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:14:26.481 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:14:26.481 Executing command: ['/bdevs/malloc delete Malloc0', 'Malloc0', False] 00:14:26.481 10:14:59 spdkcli_iscsi -- spdkcli/iscsi.sh@73 -- # timing_exit spdkcli_clear_iscsi_config 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:14:26.481 10:14:59 spdkcli_iscsi -- spdkcli/iscsi.sh@75 -- # killprocess 75209 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@948 -- # '[' -z 75209 ']' 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@952 -- # kill -0 75209 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@953 -- # uname 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75209 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:26.481 killing process with pid 75209 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75209' 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@967 -- # kill 75209 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@972 -- # wait 75209 00:14:26.481 10:14:59 spdkcli_iscsi -- spdkcli/iscsi.sh@1 -- # cleanup 00:14:26.481 10:14:59 spdkcli_iscsi -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:14:26.481 10:14:59 spdkcli_iscsi -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:14:26.481 10:14:59 spdkcli_iscsi -- spdkcli/common.sh@16 -- # '[' -n 75209 ']' 00:14:26.481 10:14:59 spdkcli_iscsi -- spdkcli/common.sh@17 -- # killprocess 75209 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@948 -- # '[' -z 75209 ']' 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@952 -- # kill -0 75209 00:14:26.481 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (75209) - No such process 00:14:26.481 Process with pid 75209 is not found 00:14:26.481 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@975 -- # echo 'Process with pid 75209 is not found' 00:14:26.481 10:14:59 spdkcli_iscsi -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:14:26.481 10:14:59 spdkcli_iscsi -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_iscsi.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:14:26.804 00:14:26.804 real 0m16.418s 00:14:26.804 user 0m35.117s 00:14:26.804 sys 0m1.126s 00:14:26.804 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:26.804 ************************************ 00:14:26.804 END TEST spdkcli_iscsi 00:14:26.804 ************************************ 00:14:26.804 10:14:59 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:14:26.804 10:14:59 -- common/autotest_common.sh@1142 -- # return 0 00:14:26.804 10:14:59 -- spdk/autotest.sh@267 -- # run_test spdkcli_raid 
/home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:14:26.804 10:14:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:26.804 10:14:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.804 10:14:59 -- common/autotest_common.sh@10 -- # set +x 00:14:26.804 ************************************ 00:14:26.804 START TEST spdkcli_raid 00:14:26.804 ************************************ 00:14:26.804 10:14:59 spdkcli_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:14:26.804 * Looking for test storage... 00:14:26.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:14:26.804 10:14:59 spdkcli_raid -- 
iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:14:26.804 10:14:59 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:14:26.804 10:14:59 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:26.804 10:14:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=75512 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:14:26.804 10:14:59 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 75512 00:14:26.804 10:14:59 spdkcli_raid -- common/autotest_common.sh@829 -- # '[' -z 75512 ']' 00:14:26.804 10:14:59 spdkcli_raid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.804 10:14:59 spdkcli_raid -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.804 10:14:59 spdkcli_raid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.804 10:14:59 spdkcli_raid -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.804 10:14:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.804 [2024-07-25 10:14:59.969482] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:14:26.804 [2024-07-25 10:14:59.969581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75512 ] 00:14:27.062 [2024-07-25 10:15:00.109201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:27.062 [2024-07-25 10:15:00.225450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.062 [2024-07-25 10:15:00.225457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.994 10:15:00 spdkcli_raid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:27.994 10:15:00 spdkcli_raid -- common/autotest_common.sh@862 -- # return 0 00:14:27.994 10:15:00 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:14:27.994 10:15:00 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:27.994 10:15:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:27.994 10:15:00 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:14:27.994 10:15:00 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:27.994 10:15:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:27.995 10:15:00 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:14:27.995 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:14:27.995 ' 00:14:29.366 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:14:29.366 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:14:29.366 10:15:02 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:14:29.366 10:15:02 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.366 10:15:02 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:14:29.366 10:15:02 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:14:29.366 10:15:02 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:29.366 10:15:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.366 10:15:02 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:14:29.366 ' 00:14:30.739 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:14:30.739 10:15:03 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:14:30.739 10:15:03 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:30.739 10:15:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:30.739 10:15:03 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:14:30.739 10:15:03 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:30.739 10:15:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:30.739 10:15:03 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:14:30.739 10:15:03 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:14:31.305 10:15:04 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:14:31.305 10:15:04 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:14:31.305 10:15:04 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:14:31.305 10:15:04 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:31.305 10:15:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:31.305 10:15:04 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:14:31.305 10:15:04 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.305 10:15:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:31.305 10:15:04 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:14:31.305 ' 00:14:32.681 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:14:32.681 10:15:05 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:14:32.681 10:15:05 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.681 10:15:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:32.681 10:15:05 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:14:32.681 10:15:05 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:32.681 10:15:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:32.681 10:15:05 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:14:32.681 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:14:32.681 ' 00:14:34.055 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:14:34.055 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:14:34.055 10:15:07 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:14:34.055 10:15:07 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:34.055 10:15:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.055 10:15:07 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 75512 00:14:34.055 10:15:07 spdkcli_raid -- common/autotest_common.sh@948 -- # '[' -z 75512 ']' 00:14:34.055 10:15:07 spdkcli_raid -- common/autotest_common.sh@952 -- # kill -0 75512 00:14:34.055 10:15:07 spdkcli_raid -- 
common/autotest_common.sh@953 -- # uname 00:14:34.055 10:15:07 spdkcli_raid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.055 10:15:07 spdkcli_raid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75512 00:14:34.055 10:15:07 spdkcli_raid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:34.055 killing process with pid 75512 00:14:34.055 10:15:07 spdkcli_raid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:34.055 10:15:07 spdkcli_raid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75512' 00:14:34.055 10:15:07 spdkcli_raid -- common/autotest_common.sh@967 -- # kill 75512 00:14:34.055 10:15:07 spdkcli_raid -- common/autotest_common.sh@972 -- # wait 75512 00:14:34.313 10:15:07 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:14:34.313 10:15:07 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 75512 ']' 00:14:34.313 10:15:07 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 75512 00:14:34.313 10:15:07 spdkcli_raid -- common/autotest_common.sh@948 -- # '[' -z 75512 ']' 00:14:34.313 10:15:07 spdkcli_raid -- common/autotest_common.sh@952 -- # kill -0 75512 00:14:34.313 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (75512) - No such process 00:14:34.313 Process with pid 75512 is not found 00:14:34.313 10:15:07 spdkcli_raid -- common/autotest_common.sh@975 -- # echo 'Process with pid 75512 is not found' 00:14:34.313 10:15:07 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:14:34.313 10:15:07 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:14:34.313 10:15:07 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:14:34.313 10:15:07 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:14:34.313 00:14:34.313 real 0m7.699s 00:14:34.313 user 0m16.685s 00:14:34.313 sys 
0m0.903s 00:14:34.313 10:15:07 spdkcli_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.313 10:15:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.313 ************************************ 00:14:34.313 END TEST spdkcli_raid 00:14:34.313 ************************************ 00:14:34.313 10:15:07 -- common/autotest_common.sh@1142 -- # return 0 00:14:34.313 10:15:07 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:14:34.313 10:15:07 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:14:34.313 10:15:07 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:14:34.313 10:15:07 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:14:34.313 10:15:07 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:14:34.313 10:15:07 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:14:34.313 10:15:07 -- spdk/autotest.sh@330 -- # '[' 1 -eq 1 ']' 00:14:34.313 10:15:07 -- spdk/autotest.sh@331 -- # run_test blockdev_rbd /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh rbd 00:14:34.313 10:15:07 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:34.313 10:15:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.313 10:15:07 -- common/autotest_common.sh@10 -- # set +x 00:14:34.313 ************************************ 00:14:34.313 START TEST blockdev_rbd 00:14:34.313 ************************************ 00:14:34.313 10:15:07 blockdev_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh rbd 00:14:34.572 * Looking for test storage... 
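A note on the spdkcli_job.py invocations in the raid test above: the `'\''` runs in the xtrace output are shell re-quoting, not part of the commands — each argument reaches the script wrapped in single quotes. A minimal sketch rebuilding one such argument by hand (variable names are illustrative only):

```shell
# xtrace re-quotes every argument; '\'' means close quote, literal
# apostrophe, reopen quote. The job script receives the plain command
# string wrapped in apostrophes:
inner='/bdevs/malloc create 8 512 Malloc1'
quoted="'$inner'"        # inside double quotes, ' is a literal character
printf '%s\n' "$quoted"  # prints '/bdevs/malloc create 8 512 Malloc1'
```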
00:14:34.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:34.572 10:15:07 blockdev_rbd -- bdev/nbd_common.sh@6 -- # set -e 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@20 -- # : 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@673 -- # uname -s 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@681 -- # test_type=rbd 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@683 -- # dek= 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@689 -- # [[ rbd == 
bdev ]] 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@689 -- # [[ rbd == crypto_* ]] 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75755 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:34.572 10:15:07 blockdev_rbd -- bdev/blockdev.sh@49 -- # waitforlisten 75755 00:14:34.572 10:15:07 blockdev_rbd -- common/autotest_common.sh@829 -- # '[' -z 75755 ']' 00:14:34.572 10:15:07 blockdev_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.572 10:15:07 blockdev_rbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.572 10:15:07 blockdev_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.572 10:15:07 blockdev_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.572 10:15:07 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:34.572 [2024-07-25 10:15:07.712472] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:14:34.572 [2024-07-25 10:15:07.713483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75755 ] 00:14:34.830 [2024-07-25 10:15:07.858282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.830 [2024-07-25 10:15:07.967779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.805 10:15:08 blockdev_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.805 10:15:08 blockdev_rbd -- common/autotest_common.sh@862 -- # return 0 00:14:35.805 10:15:08 blockdev_rbd -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:35.805 10:15:08 blockdev_rbd -- bdev/blockdev.sh@719 -- # setup_rbd_conf 00:14:35.805 10:15:08 blockdev_rbd -- bdev/blockdev.sh@260 -- # timing_enter rbd_setup 00:14:35.805 10:15:08 blockdev_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:35.805 10:15:08 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:35.805 10:15:08 blockdev_rbd -- bdev/blockdev.sh@261 -- # rbd_setup 127.0.0.1 00:14:35.805 10:15:08 blockdev_rbd -- common/autotest_common.sh@1005 -- # '[' -z 127.0.0.1 ']' 00:14:35.805 10:15:08 blockdev_rbd -- common/autotest_common.sh@1009 -- # '[' -n '' ']' 00:14:35.805 10:15:08 blockdev_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:14:35.805 10:15:08 blockdev_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:14:35.805 10:15:08 blockdev_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:14:35.805 10:15:08 blockdev_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:14:35.805 10:15:08 blockdev_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:14:35.805 10:15:08 blockdev_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:14:35.805 10:15:08 blockdev_rbd -- common/autotest_common.sh@1021 -- # RBD_NAME=foo 
00:14:35.805 10:15:08 blockdev_rbd -- common/autotest_common.sh@1022 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:14:35.805 + base_dir=/var/tmp/ceph 00:14:35.805 + image=/var/tmp/ceph/ceph_raw.img 00:14:35.805 + dev=/dev/loop200 00:14:35.805 + pkill -9 ceph 00:14:35.805 + sleep 3 00:14:39.081 + umount /dev/loop200p2 00:14:39.081 umount: /dev/loop200p2: no mount point specified. 00:14:39.081 + losetup -d /dev/loop200 00:14:39.081 losetup: /dev/loop200: detach failed: No such device or address 00:14:39.081 + rm -rf /var/tmp/ceph 00:14:39.081 10:15:11 blockdev_rbd -- common/autotest_common.sh@1023 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 127.0.0.1 00:14:39.081 + set -e 00:14:39.081 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:14:39.081 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:14:39.081 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:14:39.081 + base_dir=/var/tmp/ceph 00:14:39.081 + mon_ip=127.0.0.1 00:14:39.081 + mon_dir=/var/tmp/ceph/mon.a 00:14:39.081 + pid_dir=/var/tmp/ceph/pid 00:14:39.082 + ceph_conf=/var/tmp/ceph/ceph.conf 00:14:39.082 + mnt_dir=/var/tmp/ceph/mnt 00:14:39.082 + image=/var/tmp/ceph_raw.img 00:14:39.082 + dev=/dev/loop200 00:14:39.082 + modprobe loop 00:14:39.082 + umount /dev/loop200p2 00:14:39.082 umount: /dev/loop200p2: no mount point specified. 00:14:39.082 + true 00:14:39.082 + losetup -d /dev/loop200 00:14:39.082 losetup: /dev/loop200: detach failed: No such device or address 00:14:39.082 + true 00:14:39.082 + '[' -d /var/tmp/ceph ']' 00:14:39.082 + mkdir /var/tmp/ceph 00:14:39.082 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:14:39.082 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:14:39.082 + fallocate -l 4G /var/tmp/ceph_raw.img 00:14:39.082 + mknod /dev/loop200 b 7 200 00:14:39.082 mknod: /dev/loop200: File exists 00:14:39.082 + true 00:14:39.082 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:14:39.082 + PARTED='parted -s' 00:14:39.082 + SGDISK=sgdisk 00:14:39.082 Partitioning /dev/loop200 00:14:39.082 + echo 'Partitioning /dev/loop200' 00:14:39.082 + parted -s /dev/loop200 mktable gpt 00:14:39.082 + sleep 2 00:14:40.986 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:14:40.986 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:14:40.986 Setting name on /dev/loop200 00:14:40.986 + partno=0 00:14:40.986 + echo 'Setting name on /dev/loop200' 00:14:40.986 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:14:41.940 Warning: The kernel is still using the old partition table. 00:14:41.940 The new table will be used at the next reboot or after you 00:14:41.940 run partprobe(8) or kpartx(8) 00:14:41.940 The operation has completed successfully. 00:14:41.940 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:14:42.872 Warning: The kernel is still using the old partition table. 00:14:42.872 The new table will be used at the next reboot or after you 00:14:42.872 run partprobe(8) or kpartx(8) 00:14:42.872 The operation has completed successfully. 
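The `parted mkpart primary 0% 2GiB` / `2GiB 100%` split above lands on fixed sector boundaries; a quick arithmetic check (assuming 512-byte sectors and the usual 1 MiB partition alignment) reproduces the numbers kpartx reports for loop200p1/p2:

```shell
# Partition 1 spans the 1 MiB alignment gap up to the 2 GiB boundary;
# partition 2 starts at the 2 GiB boundary. All figures in 512-byte sectors.
sector=512
p2_start=$(( 2 * 1024 * 1024 * 1024 / sector ))  # 2 GiB boundary -> 4194304
p1_start=2048                                    # 1 MiB alignment
p1_len=$(( p2_start - p1_start ))                # -> 4192256
echo "$p1_start $p1_len $p2_start"               # 2048 4192256 4194304
```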
00:14:42.872 + kpartx /dev/loop200 00:14:42.872 loop200p1 : 0 4192256 /dev/loop200 2048 00:14:42.872 loop200p2 : 0 4192256 /dev/loop200 4194304 00:14:42.872 ++ ceph -v 00:14:42.872 ++ awk '{print $3}' 00:14:43.130 + ceph_version=17.2.7 00:14:43.130 + ceph_maj=17 00:14:43.130 + '[' 17 -gt 12 ']' 00:14:43.130 + update_config=true 00:14:43.130 + rm -f /var/log/ceph/ceph-mon.a.log 00:14:43.130 + set_min_mon_release='--set-min-mon-release 14' 00:14:43.130 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:14:43.130 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:14:43.130 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:14:43.130 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:14:43.130 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:14:43.130 = sectsz=512 attr=2, projid32bit=1 00:14:43.130 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:43.130 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:43.130 data = bsize=4096 blocks=524032, imaxpct=25 00:14:43.130 = sunit=0 swidth=0 blks 00:14:43.130 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:43.130 log =internal log bsize=4096 blocks=16384, version=2 00:14:43.130 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:43.130 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:43.130 Discarding blocks...Done. 00:14:43.130 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:14:43.130 + cat 00:14:43.130 + rm -rf '/var/tmp/ceph/mon.a/*' 00:14:43.130 + mkdir -p /var/tmp/ceph/mon.a 00:14:43.130 + mkdir -p /var/tmp/ceph/pid 00:14:43.130 + rm -f /etc/ceph/ceph.client.admin.keyring 00:14:43.130 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:14:43.130 creating /var/tmp/ceph/keyring 00:14:43.130 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:14:43.130 + monmaptool --create --clobber --add a 127.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:14:43.130 monmaptool: monmap file /var/tmp/ceph/monmap 00:14:43.130 monmaptool: generated fsid f538d891-a0d3-48cc-999e-3328d3eb0324 00:14:43.130 setting min_mon_release = octopus 00:14:43.130 epoch 0 00:14:43.130 fsid f538d891-a0d3-48cc-999e-3328d3eb0324 00:14:43.130 last_changed 2024-07-25T10:15:16.287106+0000 00:14:43.130 created 2024-07-25T10:15:16.287106+0000 00:14:43.130 min_mon_release 15 (octopus) 00:14:43.130 election_strategy: 1 00:14:43.131 0: v2:127.0.0.1:12046/0 mon.a 00:14:43.131 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:14:43.131 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:14:43.131 + '[' true = true ']' 00:14:43.131 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:14:43.131 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:14:43.131 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:14:43.131 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:14:43.131 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:14:43.131 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:14:43.131 ++ hostname 00:14:43.131 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:14:43.388 + true 00:14:43.388 + '[' true = true ']' 00:14:43.388 + ceph-conf --name mon.a --show-config-value log_file 00:14:43.388 
/var/log/ceph/ceph-mon.a.log 00:14:43.388 ++ ceph -s 00:14:43.388 ++ grep id 00:14:43.388 ++ awk '{print $2}' 00:14:43.646 + fsid=f538d891-a0d3-48cc-999e-3328d3eb0324 00:14:43.646 + sed -i 's/perf = true/perf = true\n\tfsid = f538d891-a0d3-48cc-999e-3328d3eb0324 \n/g' /var/tmp/ceph/ceph.conf 00:14:43.646 + (( ceph_maj < 18 )) 00:14:43.646 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:14:43.646 + cat /var/tmp/ceph/ceph.conf 00:14:43.646 [global] 00:14:43.646 debug_lockdep = 0/0 00:14:43.646 debug_context = 0/0 00:14:43.646 debug_crush = 0/0 00:14:43.646 debug_buffer = 0/0 00:14:43.646 debug_timer = 0/0 00:14:43.646 debug_filer = 0/0 00:14:43.646 debug_objecter = 0/0 00:14:43.646 debug_rados = 0/0 00:14:43.646 debug_rbd = 0/0 00:14:43.646 debug_ms = 0/0 00:14:43.646 debug_monc = 0/0 00:14:43.646 debug_tp = 0/0 00:14:43.646 debug_auth = 0/0 00:14:43.646 debug_finisher = 0/0 00:14:43.646 debug_heartbeatmap = 0/0 00:14:43.646 debug_perfcounter = 0/0 00:14:43.646 debug_asok = 0/0 00:14:43.646 debug_throttle = 0/0 00:14:43.646 debug_mon = 0/0 00:14:43.646 debug_paxos = 0/0 00:14:43.646 debug_rgw = 0/0 00:14:43.646 00:14:43.646 perf = true 00:14:43.646 osd objectstore = filestore 00:14:43.646 00:14:43.646 fsid = f538d891-a0d3-48cc-999e-3328d3eb0324 00:14:43.646 00:14:43.646 mutex_perf_counter = false 00:14:43.646 throttler_perf_counter = false 00:14:43.646 rbd cache = false 00:14:43.646 mon_allow_pool_delete = true 00:14:43.646 00:14:43.646 osd_pool_default_size = 1 00:14:43.646 00:14:43.646 [mon] 00:14:43.646 mon_max_pool_pg_num=166496 00:14:43.646 mon_osd_max_split_count = 10000 00:14:43.646 mon_pg_warn_max_per_osd = 10000 00:14:43.646 00:14:43.646 [osd] 00:14:43.646 osd_op_threads = 64 00:14:43.646 filestore_queue_max_ops=5000 00:14:43.646 filestore_queue_committing_max_ops=5000 00:14:43.646 journal_max_write_entries=1000 00:14:43.646 journal_queue_max_ops=3000 00:14:43.646 objecter_inflight_ops=102400 00:14:43.646 
filestore_wbthrottle_enable=false 00:14:43.646 filestore_queue_max_bytes=1048576000 00:14:43.646 filestore_queue_committing_max_bytes=1048576000 00:14:43.646 journal_max_write_bytes=1048576000 00:14:43.646 journal_queue_max_bytes=1048576000 00:14:43.646 ms_dispatch_throttle_bytes=1048576000 00:14:43.646 objecter_inflight_op_bytes=1048576000 00:14:43.646 filestore_max_sync_interval=10 00:14:43.646 osd_client_message_size_cap = 0 00:14:43.646 osd_client_message_cap = 0 00:14:43.646 osd_enable_op_tracker = false 00:14:43.646 filestore_fd_cache_size = 10240 00:14:43.646 filestore_fd_cache_shards = 64 00:14:43.646 filestore_op_threads = 16 00:14:43.646 osd_op_num_shards = 48 00:14:43.646 osd_op_num_threads_per_shard = 2 00:14:43.646 osd_pg_object_context_cache_count = 10240 00:14:43.646 filestore_odsync_write = True 00:14:43.646 journal_dynamic_throttle = True 00:14:43.646 00:14:43.646 [osd.0] 00:14:43.646 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:14:43.646 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:14:43.646 00:14:43.646 # add mon address 00:14:43.646 [mon.a] 00:14:43.646 mon addr = v2:127.0.0.1:12046 00:14:43.646 + i=0 00:14:43.646 + mkdir -p /var/tmp/ceph/mnt 00:14:43.646 ++ uuidgen 00:14:43.646 + uuid=99151757-ab59-4f18-bffb-bf1fdba9a12e 00:14:43.646 + ceph -c /var/tmp/ceph/ceph.conf osd create 99151757-ab59-4f18-bffb-bf1fdba9a12e 0 00:14:43.904 0 00:14:43.904 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid 99151757-ab59-4f18-bffb-bf1fdba9a12e --check-needs-journal --no-mon-config 00:14:44.162 2024-07-25T10:15:17.191+0000 7f625406a400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:14:44.162 2024-07-25T10:15:17.192+0000 7f625406a400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:14:44.162 2024-07-25T10:15:17.245+0000 7f625406a400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected 99151757-ab59-4f18-bffb-bf1fdba9a12e, invalid (someone else's?) journal 00:14:44.162 2024-07-25T10:15:17.283+0000 7f625406a400 -1 journal do_read_entry(4096): bad header magic 00:14:44.162 2024-07-25T10:15:17.283+0000 7f625406a400 -1 journal do_read_entry(4096): bad header magic 00:14:44.162 ++ hostname 00:14:44.162 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:14:45.534 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:14:45.534 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:14:45.534 added key for osd.0 00:14:45.856 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:14:45.856 + class_dir=/lib64/rados-classes 00:14:45.856 + [[ -e /lib64/rados-classes ]] 00:14:45.856 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:14:46.114 + pkill -9 ceph-osd 00:14:46.371 + true 00:14:46.371 + sleep 2 00:14:48.267 + mkdir -p /var/tmp/ceph/pid 00:14:48.267 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:14:48.267 2024-07-25T10:15:21.427+0000 7f15df038400 -1 Falling back to public interface 00:14:48.267 2024-07-25T10:15:21.481+0000 7f15df038400 -1 journal do_read_entry(8192): bad header magic 00:14:48.267 2024-07-25T10:15:21.481+0000 7f15df038400 -1 journal do_read_entry(8192): bad header magic 00:14:48.267 2024-07-25T10:15:21.491+0000 7f15df038400 -1 osd.0 0 log_to_monitors true 00:14:49.640 10:15:22 blockdev_rbd -- common/autotest_common.sh@1025 -- # ceph osd pool create rbd 128 00:14:50.573 pool 'rbd' created 00:14:50.573 10:15:23 blockdev_rbd -- common/autotest_common.sh@1026 -- # rbd create foo --size 1000 
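The ceph.conf rewrites above (`fsid = ...`, `osd objectstore = filestore`) all work by anchoring a GNU sed substitution on the `perf = true` line; a standalone reproduction on a scratch file (the temp file stands in for the real /var/tmp/ceph/ceph.conf):

```shell
# start.sh appends keys by rewriting the anchor line "perf = true";
# in a GNU sed replacement, \n and \t expand to newline and tab.
conf=$(mktemp)
printf 'perf = true\n' > "$conf"
sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' "$conf"
cat "$conf"
rm -f "$conf"
```

Because the anchor text is preserved in the replacement, repeated runs of start.sh with different keys keep stacking new lines under the same `perf = true` marker.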
00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@262 -- # timing_exit rbd_setup 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@264 -- # rpc_cmd bdev_rbd_create -b Ceph0 rbd foo 512 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:54.755 [2024-07-25 10:15:27.682137] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:14:54.755 WARNING:bdev_rbd_create should be used with specifying -c to have a cluster name after bdev_rbd_register_cluster. 00:14:54.755 Ceph0 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@739 -- # cat 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "a2b1c933-f28d-43ec-ae7e-7d3671bb8809"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "a2b1c933-f28d-43ec-ae7e-7d3671bb8809",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' 
' }' ' }' '}' 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@751 -- # hello_world_bdev=Ceph0 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:54.755 10:15:27 blockdev_rbd -- bdev/blockdev.sh@753 -- # killprocess 75755 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@948 -- # '[' -z 75755 ']' 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@952 -- # kill -0 75755 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@953 -- # uname 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75755 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:54.755 killing process with pid 75755 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75755' 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@967 -- # kill 75755 00:14:54.755 10:15:27 blockdev_rbd -- common/autotest_common.sh@972 -- # wait 75755 00:14:55.012 10:15:28 blockdev_rbd -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:55.012 10:15:28 blockdev_rbd -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Ceph0 '' 00:14:55.012 10:15:28 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:14:55.012 10:15:28 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.012 10:15:28 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:55.270 
************************************ 00:14:55.270 START TEST bdev_hello_world 00:14:55.270 ************************************ 00:14:55.270 10:15:28 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Ceph0 '' 00:14:55.270 [2024-07-25 10:15:28.371110] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:14:55.270 [2024-07-25 10:15:28.371253] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76628 ] 00:14:55.270 [2024-07-25 10:15:28.512335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.528 [2024-07-25 10:15:28.653674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.786 [2024-07-25 10:15:28.826874] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:14:55.786 [2024-07-25 10:15:28.844271] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:55.786 [2024-07-25 10:15:28.844345] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Ceph0 00:14:55.786 [2024-07-25 10:15:28.844380] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:55.786 [2024-07-25 10:15:28.851450] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:55.786 [2024-07-25 10:15:28.869942] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:55.786 [2024-07-25 10:15:28.870006] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:55.786 [2024-07-25 10:15:28.890998] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:14:55.786 00:14:55.786 [2024-07-25 10:15:28.891094] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:56.095 00:14:56.095 real 0m0.851s 00:14:56.095 user 0m0.548s 00:14:56.095 sys 0m0.177s 00:14:56.095 10:15:29 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:56.095 10:15:29 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:56.095 ************************************ 00:14:56.095 END TEST bdev_hello_world 00:14:56.095 ************************************ 00:14:56.095 10:15:29 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:14:56.095 10:15:29 blockdev_rbd -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:56.095 10:15:29 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:56.095 10:15:29 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.095 10:15:29 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:56.095 ************************************ 00:14:56.095 START TEST bdev_bounds 00:14:56.095 ************************************ 00:14:56.095 Process bdevio pid: 76677 00:14:56.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:56.095 10:15:29 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:14:56.095 10:15:29 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=76677 00:14:56.095 10:15:29 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:56.095 10:15:29 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:56.095 10:15:29 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 76677' 00:14:56.095 10:15:29 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 76677 00:14:56.095 10:15:29 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 76677 ']' 00:14:56.095 10:15:29 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.095 10:15:29 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.095 10:15:29 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.095 10:15:29 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.095 10:15:29 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:56.095 [2024-07-25 10:15:29.271498] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:14:56.095 [2024-07-25 10:15:29.272859] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76677 ] 00:14:56.353 [2024-07-25 10:15:29.419253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:56.353 [2024-07-25 10:15:29.558692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.353 [2024-07-25 10:15:29.558834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.353 [2024-07-25 10:15:29.558854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.610 [2024-07-25 10:15:29.748797] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:14:57.174 10:15:30 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.174 10:15:30 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:14:57.174 10:15:30 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:57.432 I/O targets: 00:14:57.432 Ceph0: 2048000 blocks of 512 bytes (1000 MiB) 00:14:57.432 00:14:57.432 00:14:57.432 CUnit - A unit testing framework for C - Version 2.1-3 00:14:57.432 http://cunit.sourceforge.net/ 00:14:57.432 00:14:57.432 00:14:57.432 Suite: bdevio tests on: Ceph0 00:14:57.432 Test: blockdev write read block ...passed 00:14:57.432 Test: blockdev write zeroes read block ...passed 00:14:57.432 Test: blockdev write zeroes read no split ...passed 00:14:57.432 Test: blockdev write zeroes read split ...passed 00:14:57.432 Test: blockdev write zeroes read split partial ...passed 00:14:57.432 Test: blockdev reset ...passed 00:14:57.432 Test: blockdev write read 8 blocks ...passed 00:14:57.432 Test: blockdev write read size > 128k ...passed 00:14:57.432 Test: blockdev write read invalid 
size ...passed 00:14:57.432 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:57.432 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:57.432 Test: blockdev write read max offset ...passed 00:14:57.432 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:57.432 Test: blockdev writev readv 8 blocks ...passed 00:14:57.432 Test: blockdev writev readv 30 x 1block ...passed 00:14:57.432 Test: blockdev writev readv block ...passed 00:14:57.432 Test: blockdev writev readv size > 128k ...passed 00:14:57.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:57.690 Test: blockdev comparev and writev ...passed 00:14:57.690 Test: blockdev nvme passthru rw ...passed 00:14:57.690 Test: blockdev nvme passthru vendor specific ...passed 00:14:57.690 Test: blockdev nvme admin passthru ...passed 00:14:57.690 Test: blockdev copy ...passed 00:14:57.690 00:14:57.690 Run Summary: Type Total Ran Passed Failed Inactive 00:14:57.690 suites 1 1 n/a 0 0 00:14:57.690 tests 23 23 23 0 0 00:14:57.690 asserts 130 130 130 0 n/a 00:14:57.690 00:14:57.690 Elapsed time = 0.407 seconds 00:14:57.690 0 00:14:57.690 10:15:30 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 76677 00:14:57.690 10:15:30 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 76677 ']' 00:14:57.690 10:15:30 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 76677 00:14:57.690 10:15:30 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:14:57.690 10:15:30 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:57.690 10:15:30 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76677 00:14:57.690 killing process with pid 76677 00:14:57.690 10:15:30 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:57.690 10:15:30 blockdev_rbd.bdev_bounds -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:57.690 10:15:30 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76677' 00:14:57.690 10:15:30 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@967 -- # kill 76677 00:14:57.690 10:15:30 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@972 -- # wait 76677 00:14:57.947 ************************************ 00:14:57.947 END TEST bdev_bounds 00:14:57.947 ************************************ 00:14:57.947 10:15:30 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:57.947 00:14:57.947 real 0m1.820s 00:14:57.947 user 0m4.672s 00:14:57.947 sys 0m0.318s 00:14:57.947 10:15:31 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.947 10:15:31 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:57.947 10:15:31 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:14:57.947 10:15:31 blockdev_rbd -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Ceph0 '' 00:14:57.947 10:15:31 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:57.947 10:15:31 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.947 10:15:31 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:57.947 ************************************ 00:14:57.947 START TEST bdev_nbd 00:14:57.947 ************************************ 00:14:57.947 10:15:31 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Ceph0 '' 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:57.948 10:15:31 
blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Ceph0') 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Ceph0') 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:57.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=76744 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 76744 /var/tmp/spdk-nbd.sock 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 76744 ']' 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.948 10:15:31 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:57.948 [2024-07-25 10:15:31.135615] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:14:57.948 [2024-07-25 10:15:31.136103] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.206 [2024-07-25 10:15:31.277017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.206 [2024-07-25 10:15:31.403644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.496 [2024-07-25 10:15:31.573751] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Ceph0 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Ceph0') 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Ceph0 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Ceph0') 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # 
(( i < 1 )) 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Ceph0 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:59.062 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.062 1+0 records in 00:14:59.062 1+0 records out 00:14:59.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112475 s, 3.6 MB/s 00:14:59.063 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.063 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:59.063 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.063 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 
0 ']' 00:14:59.063 10:15:32 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:59.063 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:59.063 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:14:59.321 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:59.321 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:59.321 { 00:14:59.321 "nbd_device": "/dev/nbd0", 00:14:59.321 "bdev_name": "Ceph0" 00:14:59.321 } 00:14:59.321 ]' 00:14:59.321 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:59.321 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:59.321 { 00:14:59.321 "nbd_device": "/dev/nbd0", 00:14:59.321 "bdev_name": "Ceph0" 00:14:59.321 } 00:14:59.321 ]' 00:14:59.321 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:59.580 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:59.580 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:59.580 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:59.580 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.580 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:59.580 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.580 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:59.838 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:59.838 10:15:32 
blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:59.838 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:59.838 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.838 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.838 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:59.838 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:59.838 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.838 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:59.838 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:59.838 10:15:32 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:00.096 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:00.096 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:00.096 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:00.096 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:00.096 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:00.096 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:00.096 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@127 -- # return 0 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Ceph0 /dev/nbd0 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Ceph0') 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Ceph0 /dev/nbd0 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Ceph0') 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.353 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Ceph0 /dev/nbd0 00:15:00.610 /dev/nbd0 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd0 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.610 1+0 records in 00:15:00.610 1+0 records out 00:15:00.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100095 s, 4.1 MB/s 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.610 10:15:33 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:00.868 { 00:15:00.868 "nbd_device": "/dev/nbd0", 00:15:00.868 "bdev_name": "Ceph0" 00:15:00.868 } 00:15:00.868 ]' 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:00.868 { 00:15:00.868 "nbd_device": "/dev/nbd0", 00:15:00.868 "bdev_name": "Ceph0" 00:15:00.868 } 00:15:00.868 ]' 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:00.868 256+0 records in 00:15:00.868 256+0 records out 00:15:00.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0066842 s, 157 MB/s 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:00.868 10:15:34 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:02.246 256+0 records in 00:15:02.246 256+0 records out 00:15:02.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 1.29882 s, 807 kB/s 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:02.246 10:15:35 
blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:02.246 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:02.812 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:02.812 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:02.812 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:02.812 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.812 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.812 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:02.812 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:02.812 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.812 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:02.812 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:02.812 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:02.812 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:02.812 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:02.812 10:15:35 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 
00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:15:02.812 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:03.070 malloc_lvol_verify 00:15:03.070 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:03.328 1ebfdfc3-c1f6-4ca9-8805-a443fb9c6d6b 00:15:03.328 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:03.586 3417d4c7-d20d-4395-9ab7-3894173affbf 00:15:03.586 10:15:36 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@138 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:04.151 /dev/nbd0 00:15:04.151 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:15:04.151 mke2fs 1.46.5 (30-Dec-2021) 00:15:04.151 Discarding device blocks: 0/4096 done 00:15:04.151 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:04.151 00:15:04.151 Allocating group tables: 0/1 done 00:15:04.151 Writing inode tables: 0/1 done 00:15:04.151 Creating journal (1024 blocks): done 00:15:04.151 Writing superblocks and filesystem accounting information: 0/1 done 00:15:04.151 00:15:04.152 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:15:04.152 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:04.152 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:04.152 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:04.152 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:04.152 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:04.152 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.152 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:04.152 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:04.152 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:04.152 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:04.152 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.152 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.152 10:15:37 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 76744 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 76744 ']' 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 76744 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76744 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:04.409 killing process with pid 76744 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76744' 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@967 -- # kill 76744 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@972 -- # wait 76744 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:04.409 00:15:04.409 real 0m6.609s 00:15:04.409 user 0m8.954s 00:15:04.409 sys 0m1.968s 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:04.409 10:15:37 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:04.409 ************************************ 00:15:04.409 END 
TEST bdev_nbd 00:15:04.409 ************************************ 00:15:04.667 10:15:37 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:15:04.667 10:15:37 blockdev_rbd -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:04.667 10:15:37 blockdev_rbd -- bdev/blockdev.sh@763 -- # '[' rbd = nvme ']' 00:15:04.667 10:15:37 blockdev_rbd -- bdev/blockdev.sh@763 -- # '[' rbd = gpt ']' 00:15:04.667 10:15:37 blockdev_rbd -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:04.667 10:15:37 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:04.667 10:15:37 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.667 10:15:37 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:15:04.667 ************************************ 00:15:04.667 START TEST bdev_fio 00:15:04.667 ************************************ 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:04.667 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:04.667 10:15:37 
blockdev_rbd.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Ceph0]' 00:15:04.667 10:15:37 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Ceph0 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev 
--iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:04.668 ************************************ 00:15:04.668 START TEST bdev_fio_rw_verify 00:15:04.668 ************************************ 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:04.668 10:15:37 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:04.925 job_Ceph0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:04.925 fio-3.35 00:15:04.925 Starting 1 thread 00:15:17.184 00:15:17.184 job_Ceph0: (groupid=0, jobs=1): err= 0: pid=76996: Thu Jul 25 10:15:48 2024 00:15:17.184 read: IOPS=611, BW=2444KiB/s (2503kB/s)(24.0MiB/10055msec) 00:15:17.184 slat (usec): min=3, max=546, avg=11.18, stdev=20.76 00:15:17.184 clat (usec): min=176, max=486663, avg=3436.30, stdev=28973.70 00:15:17.184 lat (usec): min=186, max=486666, avg=3447.48, stdev=28973.42 00:15:17.184 clat percentiles (usec): 00:15:17.184 | 50.000th=[ 537], 99.000th=[ 79168], 99.900th=[383779], 00:15:17.184 | 99.990th=[488637], 99.999th=[488637] 00:15:17.184 write: IOPS=663, BW=2655KiB/s (2719kB/s)(26.1MiB/10055msec); 0 zone resets 00:15:17.184 slat (usec): min=13, max=1750, avg=21.82, stdev=27.31 00:15:17.184 clat (usec): min=1799, max=325107, avg=8826.13, stdev=20485.91 00:15:17.184 lat (usec): min=1826, max=325137, avg=8847.95, stdev=20487.74 00:15:17.184 clat percentiles (msec): 00:15:17.184 | 50.000th=[ 5], 99.000th=[ 104], 99.900th=[ 321], 99.990th=[ 326], 00:15:17.184 | 99.999th=[ 326] 00:15:17.184 bw ( KiB/s): min= 352, max= 5472, per=100.00%, avg=2806.84, stdev=1796.10, samples=19 00:15:17.184 iops : min= 88, max= 1368, avg=701.68, stdev=449.02, samples=19 00:15:17.184 lat (usec) : 250=0.49%, 500=19.42%, 
750=20.91%, 1000=4.66% 00:15:17.184 lat (msec) : 2=1.28%, 4=10.16%, 10=39.52%, 20=0.68%, 50=0.35% 00:15:17.184 lat (msec) : 100=1.56%, 250=0.63%, 500=0.34% 00:15:17.184 cpu : usr=98.74%, sys=0.19%, ctx=1081, majf=0, minf=43 00:15:17.184 IO depths : 1=0.1%, 2=0.1%, 4=22.8%, 8=77.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:17.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.184 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.184 issued rwts: total=6144,6675,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.184 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:17.184 00:15:17.184 Run status group 0 (all jobs): 00:15:17.184 READ: bw=2444KiB/s (2503kB/s), 2444KiB/s-2444KiB/s (2503kB/s-2503kB/s), io=24.0MiB (25.2MB), run=10055-10055msec 00:15:17.184 WRITE: bw=2655KiB/s (2719kB/s), 2655KiB/s-2655KiB/s (2719kB/s-2719kB/s), io=26.1MiB (27.3MB), run=10055-10055msec 00:15:17.184 00:15:17.184 real 0m11.117s 00:15:17.184 user 0m11.279s 00:15:17.184 sys 0m0.718s 00:15:17.184 10:15:48 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.184 10:15:48 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:17.185 ************************************ 00:15:17.185 END TEST bdev_fio_rw_verify 00:15:17.185 ************************************ 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "a2b1c933-f28d-43ec-ae7e-7d3671bb8809"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "a2b1c933-f28d-43ec-ae7e-7d3671bb8809",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": 
false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:15:17.185 10:15:48 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Ceph0 ]] 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "a2b1c933-f28d-43ec-ae7e-7d3671bb8809"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "a2b1c933-f28d-43ec-ae7e-7d3671bb8809",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@356 -- # 
echo '[job_Ceph0]' 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Ceph0 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:17.185 ************************************ 00:15:17.185 START TEST bdev_fio_trim 00:15:17.185 ************************************ 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:17.185 10:15:49 
blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:17.185 10:15:49 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:17.185 job_Ceph0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:17.185 fio-3.35 00:15:17.185 Starting 1 thread 00:15:27.152 00:15:27.152 job_Ceph0: (groupid=0, jobs=1): err= 0: pid=77182: Thu Jul 25 10:15:59 2024 00:15:27.152 write: IOPS=829, BW=3320KiB/s (3399kB/s)(32.4MiB/10008msec); 0 zone resets 00:15:27.152 slat (usec): min=3, max=812, avg=17.04, stdev=42.12 00:15:27.152 clat (usec): min=1961, max=227209, avg=9523.26, stdev=11838.04 00:15:27.152 lat (usec): min=1971, max=227213, avg=9540.31, stdev=11838.68 00:15:27.152 clat percentiles (msec): 00:15:27.152 | 50.000th=[ 9], 99.000th=[ 22], 99.900th=[ 226], 99.990th=[ 228], 00:15:27.152 | 99.999th=[ 228] 00:15:27.152 bw ( KiB/s): min= 1912, max= 5136, per=99.98%, avg=3319.60, stdev=863.34, samples=20 00:15:27.152 iops : min= 478, max= 1284, avg=829.90, stdev=215.84, samples=20 00:15:27.152 trim: IOPS=829, BW=3320KiB/s (3399kB/s)(32.4MiB/10008msec); 0 zone resets 00:15:27.152 slat (usec): min=2, max=558, avg= 9.37, stdev=25.44 00:15:27.152 clat (usec): min=2, max=9164, avg=83.19, stdev=209.45 00:15:27.152 lat (usec): min=10, max=9249, avg=92.56, stdev=210.46 00:15:27.152 clat percentiles (usec): 00:15:27.152 | 50.000th=[ 61], 99.000th=[ 322], 99.900th=[ 693], 99.990th=[ 9110], 00:15:27.152 | 99.999th=[ 9110] 00:15:27.152 bw ( KiB/s): min= 1912, max= 5200, per=100.00%, avg=3322.40, stdev=867.50, samples=20 00:15:27.152 iops : min= 478, max= 1300, avg=830.60, stdev=216.88, 
samples=20 00:15:27.152 lat (usec) : 4=0.31%, 10=1.14%, 20=5.86%, 50=14.41%, 100=13.98% 00:15:27.152 lat (usec) : 250=13.04%, 500=1.16%, 750=0.05%, 1000=0.02% 00:15:27.152 lat (msec) : 2=0.01%, 4=3.26%, 10=28.83%, 20=17.34%, 50=0.25% 00:15:27.152 lat (msec) : 100=0.14%, 250=0.20% 00:15:27.152 cpu : usr=98.15%, sys=0.24%, ctx=1659, majf=0, minf=14 00:15:27.152 IO depths : 1=0.1%, 2=0.2%, 4=10.5%, 8=89.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:27.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.152 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.152 issued rwts: total=0,8306,8306,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.152 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:27.152 00:15:27.152 Run status group 0 (all jobs): 00:15:27.152 WRITE: bw=3320KiB/s (3399kB/s), 3320KiB/s-3320KiB/s (3399kB/s-3399kB/s), io=32.4MiB (34.0MB), run=10008-10008msec 00:15:27.152 TRIM: bw=3320KiB/s (3399kB/s), 3320KiB/s-3320KiB/s (3399kB/s-3399kB/s), io=32.4MiB (34.0MB), run=10008-10008msec 00:15:27.152 00:15:27.152 real 0m10.956s 00:15:27.152 user 0m11.077s 00:15:27.152 sys 0m0.615s 00:15:27.152 10:16:00 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.152 10:16:00 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:15:27.152 ************************************ 00:15:27.152 END TEST bdev_fio_trim 00:15:27.152 ************************************ 00:15:27.152 10:16:00 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:15:27.152 10:16:00 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:15:27.152 10:16:00 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:27.152 10:16:00 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:15:27.152 /home/vagrant/spdk_repo/spdk 00:15:27.152 10:16:00 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@370 -- # 
trap - SIGINT SIGTERM EXIT 00:15:27.152 00:15:27.152 real 0m22.368s 00:15:27.152 user 0m22.492s 00:15:27.152 sys 0m1.484s 00:15:27.152 10:16:00 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.152 ************************************ 00:15:27.152 END TEST bdev_fio 00:15:27.152 ************************************ 00:15:27.152 10:16:00 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:27.152 10:16:00 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:15:27.152 10:16:00 blockdev_rbd -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:27.152 10:16:00 blockdev_rbd -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:27.153 10:16:00 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:15:27.153 10:16:00 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.153 10:16:00 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:15:27.153 ************************************ 00:15:27.153 START TEST bdev_verify 00:15:27.153 ************************************ 00:15:27.153 10:16:00 blockdev_rbd.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:27.153 [2024-07-25 10:16:00.207384] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:15:27.153 [2024-07-25 10:16:00.207506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77321 ] 00:15:27.153 [2024-07-25 10:16:00.348343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:27.411 [2024-07-25 10:16:00.467305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.411 [2024-07-25 10:16:00.467312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.964 [2024-07-25 10:16:06.053501] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:15:33.964 Running I/O for 5 seconds... 00:15:38.148 00:15:38.148 Latency(us) 00:15:38.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.148 Job: Ceph0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:38.148 Verification LBA range: start 0x0 length 0x1f400 00:15:38.148 Ceph0 : 5.03 2404.36 9.39 0.00 0.00 52937.09 1966.08 527283.93 00:15:38.148 Job: Ceph0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:38.148 Verification LBA range: start 0x1f400 length 0x1f400 00:15:38.148 Ceph0 : 5.02 2428.54 9.49 0.00 0.00 52556.35 2200.14 782936.75 00:15:38.148 =================================================================================================================== 00:15:38.148 Total : 4832.90 18.88 0.00 0.00 52745.95 1966.08 782936.75 00:15:38.148 00:15:38.148 real 0m11.178s 00:15:38.148 user 0m17.484s 00:15:38.148 sys 0m0.755s 00:15:38.148 10:16:11 blockdev_rbd.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:38.148 ************************************ 00:15:38.148 END TEST bdev_verify 00:15:38.148 ************************************ 00:15:38.148 10:16:11 blockdev_rbd.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:38.148 10:16:11 
blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:15:38.148 10:16:11 blockdev_rbd -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:38.148 10:16:11 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:15:38.148 10:16:11 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.148 10:16:11 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:15:38.148 ************************************ 00:15:38.148 START TEST bdev_verify_big_io 00:15:38.148 ************************************ 00:15:38.148 10:16:11 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:38.405 [2024-07-25 10:16:11.419190] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:15:38.405 [2024-07-25 10:16:11.419287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77476 ] 00:15:38.405 [2024-07-25 10:16:11.555664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:38.405 [2024-07-25 10:16:11.655352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.405 [2024-07-25 10:16:11.655359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.662 [2024-07-25 10:16:11.823532] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:15:38.662 Running I/O for 5 seconds... 
00:15:43.927 00:15:43.927 Latency(us) 00:15:43.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.927 Job: Ceph0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:43.927 Verification LBA range: start 0x0 length 0x1f40 00:15:43.927 Ceph0 : 5.14 585.44 36.59 0.00 0.00 213135.14 9861.61 493330.04 00:15:43.927 Job: Ceph0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:43.927 Verification LBA range: start 0x1f40 length 0x1f40 00:15:43.927 Ceph0 : 5.15 546.48 34.16 0.00 0.00 228317.68 4930.80 493330.04 00:15:43.927 =================================================================================================================== 00:15:43.927 Total : 1131.92 70.74 0.00 0.00 220472.45 4930.80 493330.04 00:15:44.186 00:15:44.186 real 0m5.842s 00:15:44.186 user 0m11.463s 00:15:44.186 sys 0m0.714s 00:15:44.186 10:16:17 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:44.186 10:16:17 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.186 ************************************ 00:15:44.186 END TEST bdev_verify_big_io 00:15:44.186 ************************************ 00:15:44.186 10:16:17 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:15:44.186 10:16:17 blockdev_rbd -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:44.186 10:16:17 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:44.186 10:16:17 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:44.186 10:16:17 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:15:44.186 ************************************ 00:15:44.186 START TEST bdev_write_zeroes 00:15:44.186 ************************************ 00:15:44.186 10:16:17 blockdev_rbd.bdev_write_zeroes -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:44.186 [2024-07-25 10:16:17.341175] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:15:44.186 [2024-07-25 10:16:17.341277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77576 ] 00:15:44.445 [2024-07-25 10:16:17.484427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.445 [2024-07-25 10:16:17.587663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.718 [2024-07-25 10:16:17.760776] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:15:44.718 Running I/O for 1 seconds... 00:15:46.129 00:15:46.129 Latency(us) 00:15:46.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.129 Job: Ceph0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:46.129 Ceph0 : 1.46 3650.86 14.26 0.00 0.00 34931.16 4306.65 515300.21 00:15:46.129 =================================================================================================================== 00:15:46.129 Total : 3650.86 14.26 0.00 0.00 34931.16 4306.65 515300.21 00:15:46.387 00:15:46.387 real 0m2.200s 00:15:46.387 user 0m2.084s 00:15:46.387 sys 0m0.219s 00:15:46.387 10:16:19 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.387 10:16:19 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:46.387 ************************************ 00:15:46.387 END TEST bdev_write_zeroes 00:15:46.387 ************************************ 00:15:46.387 10:16:19 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 
00:15:46.387 10:16:19 blockdev_rbd -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:46.387 10:16:19 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:46.387 10:16:19 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.387 10:16:19 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:15:46.387 ************************************ 00:15:46.387 START TEST bdev_json_nonenclosed 00:15:46.387 ************************************ 00:15:46.387 10:16:19 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:46.387 [2024-07-25 10:16:19.587570] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:15:46.387 [2024-07-25 10:16:19.587658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77636 ] 00:15:46.645 [2024-07-25 10:16:19.728451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.645 [2024-07-25 10:16:19.835280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.645 [2024-07-25 10:16:19.835358] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:15:46.645 [2024-07-25 10:16:19.835372] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:46.645 [2024-07-25 10:16:19.835382] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:46.904 00:15:46.904 real 0m0.405s 00:15:46.904 user 0m0.229s 00:15:46.904 sys 0m0.071s 00:15:46.904 10:16:19 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:15:46.904 10:16:19 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.904 10:16:19 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:46.904 ************************************ 00:15:46.904 END TEST bdev_json_nonenclosed 00:15:46.904 ************************************ 00:15:46.904 10:16:19 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 234 00:15:46.904 10:16:19 blockdev_rbd -- bdev/blockdev.sh@781 -- # true 00:15:46.904 10:16:19 blockdev_rbd -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:46.904 10:16:19 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:46.904 10:16:19 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.904 10:16:19 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:15:46.904 ************************************ 00:15:46.904 START TEST bdev_json_nonarray 00:15:46.904 ************************************ 00:15:46.904 10:16:19 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:46.904 [2024-07-25 10:16:20.055045] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 
00:15:46.905 [2024-07-25 10:16:20.055128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77659 ] 00:15:47.163 [2024-07-25 10:16:20.193815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.163 [2024-07-25 10:16:20.313779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.163 [2024-07-25 10:16:20.313889] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:15:47.163 [2024-07-25 10:16:20.313907] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:47.163 [2024-07-25 10:16:20.313921] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:47.422 00:15:47.422 real 0m0.436s 00:15:47.422 user 0m0.264s 00:15:47.422 sys 0m0.069s 00:15:47.422 10:16:20 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:15:47.422 10:16:20 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:47.422 ************************************ 00:15:47.422 END TEST bdev_json_nonarray 00:15:47.422 ************************************ 00:15:47.422 10:16:20 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 10:16:20 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 234 00:15:47.422 10:16:20 blockdev_rbd -- bdev/blockdev.sh@784 -- # true 00:15:47.422 10:16:20 blockdev_rbd -- bdev/blockdev.sh@786 -- # [[ rbd == bdev ]] 00:15:47.422 10:16:20 blockdev_rbd -- bdev/blockdev.sh@793 -- # [[ rbd == gpt ]] 00:15:47.422 10:16:20 blockdev_rbd -- bdev/blockdev.sh@797 -- # [[ rbd == crypto_sw ]] 00:15:47.422 10:16:20 blockdev_rbd -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:15:47.422 10:16:20 blockdev_rbd -- 
bdev/blockdev.sh@810 -- # cleanup 00:15:47.422 10:16:20 blockdev_rbd -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:47.422 10:16:20 blockdev_rbd -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:47.422 10:16:20 blockdev_rbd -- bdev/blockdev.sh@26 -- # [[ rbd == rbd ]] 00:15:47.422 10:16:20 blockdev_rbd -- bdev/blockdev.sh@27 -- # rbd_cleanup 00:15:47.422 10:16:20 blockdev_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:15:47.422 10:16:20 blockdev_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:15:47.422 + base_dir=/var/tmp/ceph 00:15:47.422 + image=/var/tmp/ceph/ceph_raw.img 00:15:47.422 + dev=/dev/loop200 00:15:47.422 + pkill -9 ceph 00:15:47.422 + sleep 3 00:15:50.704 + umount /dev/loop200p2 00:15:50.704 + losetup -d /dev/loop200 00:15:50.704 + rm -rf /var/tmp/ceph 00:15:50.704 10:16:23 blockdev_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:15:50.962 10:16:24 blockdev_rbd -- bdev/blockdev.sh@30 -- # [[ rbd == daos ]] 00:15:50.962 10:16:24 blockdev_rbd -- bdev/blockdev.sh@34 -- # [[ rbd = \g\p\t ]] 00:15:50.962 10:16:24 blockdev_rbd -- bdev/blockdev.sh@40 -- # [[ rbd == xnvme ]] 00:15:50.962 00:15:50.962 real 1m16.509s 00:15:50.962 user 1m31.282s 00:15:50.962 sys 0m7.711s 00:15:50.962 10:16:24 blockdev_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:50.962 10:16:24 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:15:50.962 ************************************ 00:15:50.962 END TEST blockdev_rbd 00:15:50.962 ************************************ 00:15:50.962 10:16:24 -- common/autotest_common.sh@1142 -- # return 0 00:15:50.962 10:16:24 -- spdk/autotest.sh@332 -- # run_test spdkcli_rbd /home/vagrant/spdk_repo/spdk/test/spdkcli/rbd.sh 00:15:50.962 10:16:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:50.962 10:16:24 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:15:50.962 10:16:24 -- common/autotest_common.sh@10 -- # set +x 00:15:50.962 ************************************ 00:15:50.962 START TEST spdkcli_rbd 00:15:50.962 ************************************ 00:15:50.962 10:16:24 spdkcli_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/rbd.sh 00:15:50.962 * Looking for test storage... 00:15:50.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:50.962 10:16:24 spdkcli_rbd -- spdkcli/rbd.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:15:50.962 10:16:24 spdkcli_rbd -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:15:50.962 10:16:24 spdkcli_rbd -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:15:50.962 10:16:24 spdkcli_rbd -- spdkcli/rbd.sh@11 -- # MATCH_FILE=spdkcli_rbd.test 00:15:50.962 10:16:24 spdkcli_rbd -- spdkcli/rbd.sh@12 -- # SPDKCLI_BRANCH=/bdevs/rbd 00:15:50.962 10:16:24 spdkcli_rbd -- spdkcli/rbd.sh@14 -- # trap 'rbd_cleanup; cleanup' EXIT 00:15:50.962 10:16:24 spdkcli_rbd -- spdkcli/rbd.sh@15 -- # timing_enter run_spdk_tgt 00:15:50.962 10:16:24 spdkcli_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:50.962 10:16:24 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:15:50.962 10:16:24 spdkcli_rbd -- spdkcli/rbd.sh@16 -- # run_spdk_tgt 00:15:50.962 10:16:24 spdkcli_rbd -- spdkcli/common.sh@27 -- # spdk_tgt_pid=77771 00:15:50.962 10:16:24 spdkcli_rbd -- spdkcli/common.sh@28 -- # waitforlisten 77771 00:15:50.962 10:16:24 spdkcli_rbd -- common/autotest_common.sh@829 -- # '[' -z 77771 ']' 00:15:50.962 10:16:24 spdkcli_rbd -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:15:50.962 10:16:24 spdkcli_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.962 10:16:24 spdkcli_rbd -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:15:50.962 10:16:24 spdkcli_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.962 10:16:24 spdkcli_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.962 10:16:24 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:15:51.220 [2024-07-25 10:16:24.275229] Starting SPDK v24.09-pre git sha1 c5d7cded4 / DPDK 24.03.0 initialization... 00:15:51.220 [2024-07-25 10:16:24.275327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77771 ] 00:15:51.220 [2024-07-25 10:16:24.422805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:51.478 [2024-07-25 10:16:24.542448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.478 [2024-07-25 10:16:24.542459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.046 10:16:25 spdkcli_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.046 10:16:25 spdkcli_rbd -- common/autotest_common.sh@862 -- # return 0 00:15:52.046 10:16:25 spdkcli_rbd -- spdkcli/rbd.sh@17 -- # timing_exit run_spdk_tgt 00:15:52.046 10:16:25 spdkcli_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:52.046 10:16:25 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:15:52.046 10:16:25 spdkcli_rbd -- spdkcli/rbd.sh@19 -- # timing_enter spdkcli_create_rbd_config 00:15:52.046 10:16:25 spdkcli_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:52.046 10:16:25 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:15:52.046 10:16:25 spdkcli_rbd -- spdkcli/rbd.sh@20 -- # rbd_cleanup 00:15:52.046 10:16:25 spdkcli_rbd -- 
common/autotest_common.sh@1031 -- # hash ceph 00:15:52.046 10:16:25 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:15:52.046 + base_dir=/var/tmp/ceph 00:15:52.046 + image=/var/tmp/ceph/ceph_raw.img 00:15:52.046 + dev=/dev/loop200 00:15:52.046 + pkill -9 ceph 00:15:52.046 + sleep 3 00:15:55.328 + umount /dev/loop200p2 00:15:55.328 umount: /dev/loop200p2: no mount point specified. 00:15:55.328 + losetup -d /dev/loop200 00:15:55.328 losetup: /dev/loop200: detach failed: No such device or address 00:15:55.328 + rm -rf /var/tmp/ceph 00:15:55.328 10:16:28 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:15:55.328 10:16:28 spdkcli_rbd -- spdkcli/rbd.sh@21 -- # rbd_setup 127.0.0.1 00:15:55.328 10:16:28 spdkcli_rbd -- common/autotest_common.sh@1005 -- # '[' -z 127.0.0.1 ']' 00:15:55.328 10:16:28 spdkcli_rbd -- common/autotest_common.sh@1009 -- # '[' -n '' ']' 00:15:55.328 10:16:28 spdkcli_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:15:55.328 10:16:28 spdkcli_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:15:55.328 10:16:28 spdkcli_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:15:55.328 10:16:28 spdkcli_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:15:55.328 10:16:28 spdkcli_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:15:55.328 10:16:28 spdkcli_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:15:55.328 10:16:28 spdkcli_rbd -- common/autotest_common.sh@1021 -- # RBD_NAME=foo 00:15:55.328 10:16:28 spdkcli_rbd -- common/autotest_common.sh@1022 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:15:55.328 + base_dir=/var/tmp/ceph 00:15:55.328 + image=/var/tmp/ceph/ceph_raw.img 00:15:55.328 + dev=/dev/loop200 00:15:55.328 + pkill -9 ceph 00:15:55.328 + sleep 3 00:15:58.625 + umount /dev/loop200p2 00:15:58.625 umount: /dev/loop200p2: no mount point specified. 
00:15:58.625 + losetup -d /dev/loop200 00:15:58.625 losetup: /dev/loop200: detach failed: No such device or address 00:15:58.625 + rm -rf /var/tmp/ceph 00:15:58.625 10:16:31 spdkcli_rbd -- common/autotest_common.sh@1023 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 127.0.0.1 00:15:58.625 + set -e 00:15:58.625 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:15:58.625 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:15:58.625 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:15:58.625 + base_dir=/var/tmp/ceph 00:15:58.625 + mon_ip=127.0.0.1 00:15:58.625 + mon_dir=/var/tmp/ceph/mon.a 00:15:58.625 + pid_dir=/var/tmp/ceph/pid 00:15:58.625 + ceph_conf=/var/tmp/ceph/ceph.conf 00:15:58.625 + mnt_dir=/var/tmp/ceph/mnt 00:15:58.625 + image=/var/tmp/ceph_raw.img 00:15:58.625 + dev=/dev/loop200 00:15:58.625 + modprobe loop 00:15:58.625 + umount /dev/loop200p2 00:15:58.625 umount: /dev/loop200p2: no mount point specified. 00:15:58.625 + true 00:15:58.625 + losetup -d /dev/loop200 00:15:58.625 losetup: /dev/loop200: detach failed: No such device or address 00:15:58.625 + true 00:15:58.625 + '[' -d /var/tmp/ceph ']' 00:15:58.625 + mkdir /var/tmp/ceph 00:15:58.625 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:15:58.625 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:15:58.625 + fallocate -l 4G /var/tmp/ceph_raw.img 00:15:58.625 + mknod /dev/loop200 b 7 200 00:15:58.625 mknod: /dev/loop200: File exists 00:15:58.625 + true 00:15:58.625 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:15:58.625 Partitioning /dev/loop200 00:15:58.625 + PARTED='parted -s' 00:15:58.625 + SGDISK=sgdisk 00:15:58.625 + echo 'Partitioning /dev/loop200' 00:15:58.625 + parted -s /dev/loop200 mktable gpt 00:15:58.625 + sleep 2 00:16:00.525 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:16:00.525 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:16:00.525 Setting name on /dev/loop200 00:16:00.525 + partno=0 00:16:00.525 + echo 'Setting name on /dev/loop200' 00:16:00.525 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:16:01.913 Warning: The kernel is still using the old partition table. 00:16:01.913 The new table will be used at the next reboot or after you 00:16:01.913 run partprobe(8) or kpartx(8) 00:16:01.913 The operation has completed successfully. 00:16:01.913 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:16:02.853 Warning: The kernel is still using the old partition table. 00:16:02.853 The new table will be used at the next reboot or after you 00:16:02.853 run partprobe(8) or kpartx(8) 00:16:02.853 The operation has completed successfully. 
00:16:02.853 + kpartx /dev/loop200 00:16:02.853 loop200p1 : 0 4192256 /dev/loop200 2048 00:16:02.853 loop200p2 : 0 4192256 /dev/loop200 4194304 00:16:02.853 ++ ceph -v 00:16:02.853 ++ awk '{print $3}' 00:16:02.853 + ceph_version=17.2.7 00:16:02.853 + ceph_maj=17 00:16:02.853 + '[' 17 -gt 12 ']' 00:16:02.853 + update_config=true 00:16:02.853 + rm -f /var/log/ceph/ceph-mon.a.log 00:16:02.853 + set_min_mon_release='--set-min-mon-release 14' 00:16:02.853 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:16:02.853 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:16:02.853 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:16:02.853 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:16:02.853 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:16:02.853 = sectsz=512 attr=2, projid32bit=1 00:16:02.853 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:02.853 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:02.853 data = bsize=4096 blocks=524032, imaxpct=25 00:16:02.853 = sunit=0 swidth=0 blks 00:16:02.853 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:02.853 log =internal log bsize=4096 blocks=16384, version=2 00:16:02.853 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:02.853 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:02.853 Discarding blocks...Done. 00:16:02.853 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:16:02.853 + cat 00:16:02.853 + rm -rf '/var/tmp/ceph/mon.a/*' 00:16:02.853 + mkdir -p /var/tmp/ceph/mon.a 00:16:02.853 + mkdir -p /var/tmp/ceph/pid 00:16:02.853 + rm -f /etc/ceph/ceph.client.admin.keyring 00:16:02.853 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:16:02.853 creating /var/tmp/ceph/keyring 00:16:02.853 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:16:02.853 + monmaptool --create --clobber --add a 127.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:16:02.853 monmaptool: monmap file /var/tmp/ceph/monmap 00:16:02.853 monmaptool: generated fsid ea006eac-a4e1-4239-a304-90563aa6dee9 00:16:02.853 setting min_mon_release = octopus 00:16:02.853 epoch 0 00:16:02.853 fsid ea006eac-a4e1-4239-a304-90563aa6dee9 00:16:02.853 last_changed 2024-07-25T10:16:36.083331+0000 00:16:02.853 created 2024-07-25T10:16:36.083331+0000 00:16:02.853 min_mon_release 15 (octopus) 00:16:02.853 election_strategy: 1 00:16:02.853 0: v2:127.0.0.1:12046/0 mon.a 00:16:02.853 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:16:02.853 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:16:03.111 + '[' true = true ']' 00:16:03.111 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:16:03.111 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:16:03.111 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:16:03.111 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:16:03.111 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:16:03.111 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:16:03.111 ++ hostname 00:16:03.111 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:16:03.111 + true 00:16:03.111 + '[' true = true ']' 00:16:03.111 + ceph-conf --name mon.a --show-config-value log_file 00:16:03.111 
/var/log/ceph/ceph-mon.a.log 00:16:03.111 ++ ceph -s 00:16:03.111 ++ grep id 00:16:03.111 ++ awk '{print $2}' 00:16:03.369 + fsid=ea006eac-a4e1-4239-a304-90563aa6dee9 00:16:03.369 + sed -i 's/perf = true/perf = true\n\tfsid = ea006eac-a4e1-4239-a304-90563aa6dee9 \n/g' /var/tmp/ceph/ceph.conf 00:16:03.369 + (( ceph_maj < 18 )) 00:16:03.369 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:16:03.369 + cat /var/tmp/ceph/ceph.conf 00:16:03.369 [global] 00:16:03.369 debug_lockdep = 0/0 00:16:03.369 debug_context = 0/0 00:16:03.369 debug_crush = 0/0 00:16:03.369 debug_buffer = 0/0 00:16:03.369 debug_timer = 0/0 00:16:03.369 debug_filer = 0/0 00:16:03.369 debug_objecter = 0/0 00:16:03.369 debug_rados = 0/0 00:16:03.369 debug_rbd = 0/0 00:16:03.369 debug_ms = 0/0 00:16:03.369 debug_monc = 0/0 00:16:03.369 debug_tp = 0/0 00:16:03.369 debug_auth = 0/0 00:16:03.369 debug_finisher = 0/0 00:16:03.369 debug_heartbeatmap = 0/0 00:16:03.369 debug_perfcounter = 0/0 00:16:03.369 debug_asok = 0/0 00:16:03.369 debug_throttle = 0/0 00:16:03.369 debug_mon = 0/0 00:16:03.369 debug_paxos = 0/0 00:16:03.369 debug_rgw = 0/0 00:16:03.369 00:16:03.369 perf = true 00:16:03.369 osd objectstore = filestore 00:16:03.369 00:16:03.369 fsid = ea006eac-a4e1-4239-a304-90563aa6dee9 00:16:03.369 00:16:03.369 mutex_perf_counter = false 00:16:03.369 throttler_perf_counter = false 00:16:03.369 rbd cache = false 00:16:03.369 mon_allow_pool_delete = true 00:16:03.369 00:16:03.369 osd_pool_default_size = 1 00:16:03.369 00:16:03.369 [mon] 00:16:03.369 mon_max_pool_pg_num=166496 00:16:03.369 mon_osd_max_split_count = 10000 00:16:03.369 mon_pg_warn_max_per_osd = 10000 00:16:03.369 00:16:03.369 [osd] 00:16:03.369 osd_op_threads = 64 00:16:03.369 filestore_queue_max_ops=5000 00:16:03.369 filestore_queue_committing_max_ops=5000 00:16:03.369 journal_max_write_entries=1000 00:16:03.369 journal_queue_max_ops=3000 00:16:03.369 objecter_inflight_ops=102400 00:16:03.369 
filestore_wbthrottle_enable=false 00:16:03.369 filestore_queue_max_bytes=1048576000 00:16:03.369 filestore_queue_committing_max_bytes=1048576000 00:16:03.369 journal_max_write_bytes=1048576000 00:16:03.369 journal_queue_max_bytes=1048576000 00:16:03.369 ms_dispatch_throttle_bytes=1048576000 00:16:03.369 objecter_inflight_op_bytes=1048576000 00:16:03.369 filestore_max_sync_interval=10 00:16:03.369 osd_client_message_size_cap = 0 00:16:03.369 osd_client_message_cap = 0 00:16:03.369 osd_enable_op_tracker = false 00:16:03.369 filestore_fd_cache_size = 10240 00:16:03.369 filestore_fd_cache_shards = 64 00:16:03.369 filestore_op_threads = 16 00:16:03.369 osd_op_num_shards = 48 00:16:03.369 osd_op_num_threads_per_shard = 2 00:16:03.369 osd_pg_object_context_cache_count = 10240 00:16:03.369 filestore_odsync_write = True 00:16:03.369 journal_dynamic_throttle = True 00:16:03.369 00:16:03.369 [osd.0] 00:16:03.369 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:16:03.369 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:16:03.369 00:16:03.369 # add mon address 00:16:03.369 [mon.a] 00:16:03.369 mon addr = v2:127.0.0.1:12046 00:16:03.369 + i=0 00:16:03.369 + mkdir -p /var/tmp/ceph/mnt 00:16:03.369 ++ uuidgen 00:16:03.369 + uuid=19a83474-00d0-4bf5-b14b-0b9fe64c316d 00:16:03.369 + ceph -c /var/tmp/ceph/ceph.conf osd create 19a83474-00d0-4bf5-b14b-0b9fe64c316d 0 00:16:03.626 0 00:16:03.626 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid 19a83474-00d0-4bf5-b14b-0b9fe64c316d --check-needs-journal --no-mon-config 00:16:03.883 2024-07-25T10:16:36.889+0000 7f5d6ec6c400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:16:03.883 2024-07-25T10:16:36.890+0000 7f5d6ec6c400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:16:03.883 2024-07-25T10:16:36.927+0000 7f5d6ec6c400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected 19a83474-00d0-4bf5-b14b-0b9fe64c316d, invalid (someone else's?) journal 00:16:03.883 2024-07-25T10:16:36.953+0000 7f5d6ec6c400 -1 journal do_read_entry(4096): bad header magic 00:16:03.883 2024-07-25T10:16:36.953+0000 7f5d6ec6c400 -1 journal do_read_entry(4096): bad header magic 00:16:03.883 ++ hostname 00:16:03.883 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:16:05.254 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:16:05.254 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:16:05.512 added key for osd.0 00:16:05.512 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:16:05.795 + class_dir=/lib64/rados-classes 00:16:05.795 + [[ -e /lib64/rados-classes ]] 00:16:05.795 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:16:06.052 + pkill -9 ceph-osd 00:16:06.052 + true 00:16:06.052 + sleep 2 00:16:08.579 + mkdir -p /var/tmp/ceph/pid 00:16:08.579 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:16:08.579 2024-07-25T10:16:41.282+0000 7f96ca250400 -1 Falling back to public interface 00:16:08.579 2024-07-25T10:16:41.332+0000 7f96ca250400 -1 journal do_read_entry(8192): bad header magic 00:16:08.579 2024-07-25T10:16:41.332+0000 7f96ca250400 -1 journal do_read_entry(8192): bad header magic 00:16:08.579 2024-07-25T10:16:41.339+0000 7f96ca250400 -1 osd.0 0 log_to_monitors true 00:16:09.143 10:16:42 spdkcli_rbd -- common/autotest_common.sh@1025 -- # ceph osd pool create rbd 128 00:16:10.076 pool 'rbd' created 00:16:10.076 10:16:43 spdkcli_rbd -- common/autotest_common.sh@1026 -- # rbd create foo --size 1000 
00:16:13.383 10:16:46 spdkcli_rbd -- spdkcli/rbd.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py '"/bdevs/rbd create rbd foo 512'\'' '\''Ceph0'\'' True "/bdevs/rbd' create rbd foo 512 Ceph1 'True 00:16:13.383 timing_exit spdkcli_create_rbd_config 00:16:13.383 00:16:13.383 timing_enter spdkcli_check_match 00:16:13.383 check_match 00:16:13.383 timing_exit spdkcli_check_match 00:16:13.383 00:16:13.383 timing_enter spdkcli_clear_rbd_config 00:16:13.383 /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py "/bdevs/rbd' delete Ceph0 Ceph0 '"/bdevs/rbd delete_all'\'' '\''Ceph1'\'' ' 00:16:13.949 Executing command: [' ', True] 00:16:13.949 10:16:47 spdkcli_rbd -- spdkcli/rbd.sh@31 -- # rbd_cleanup 00:16:13.949 10:16:47 spdkcli_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:16:13.949 10:16:47 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:16:13.949 + base_dir=/var/tmp/ceph 00:16:13.949 + image=/var/tmp/ceph/ceph_raw.img 00:16:13.949 + dev=/dev/loop200 00:16:13.949 + pkill -9 ceph 00:16:13.949 + sleep 3 00:16:17.233 + umount /dev/loop200p2 00:16:17.233 + losetup -d /dev/loop200 00:16:17.233 + rm -rf /var/tmp/ceph 00:16:17.233 10:16:50 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:16:17.233 10:16:50 spdkcli_rbd -- spdkcli/rbd.sh@32 -- # timing_exit spdkcli_clear_rbd_config 00:16:17.233 10:16:50 spdkcli_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:17.233 10:16:50 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:16:17.233 10:16:50 spdkcli_rbd -- spdkcli/rbd.sh@34 -- # killprocess 77771 00:16:17.233 10:16:50 spdkcli_rbd -- common/autotest_common.sh@948 -- # '[' -z 77771 ']' 00:16:17.233 10:16:50 spdkcli_rbd -- common/autotest_common.sh@952 -- # kill -0 77771 00:16:17.233 10:16:50 spdkcli_rbd -- common/autotest_common.sh@953 -- # uname 00:16:17.233 10:16:50 spdkcli_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:16:17.233 10:16:50 spdkcli_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77771 00:16:17.233 10:16:50 spdkcli_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:17.233 10:16:50 spdkcli_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:17.233 10:16:50 spdkcli_rbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77771' 00:16:17.233 killing process with pid 77771 00:16:17.233 10:16:50 spdkcli_rbd -- common/autotest_common.sh@967 -- # kill 77771 00:16:17.233 10:16:50 spdkcli_rbd -- common/autotest_common.sh@972 -- # wait 77771 00:16:17.491 10:16:50 spdkcli_rbd -- spdkcli/rbd.sh@1 -- # rbd_cleanup 00:16:17.491 10:16:50 spdkcli_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:16:17.491 10:16:50 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:16:17.491 + base_dir=/var/tmp/ceph 00:16:17.491 + image=/var/tmp/ceph/ceph_raw.img 00:16:17.491 + dev=/dev/loop200 00:16:17.491 + pkill -9 ceph 00:16:17.491 + sleep 3 00:16:20.773 + umount /dev/loop200p2 00:16:20.773 umount: /dev/loop200p2: no mount point specified. 
00:16:20.773 + losetup -d /dev/loop200 00:16:20.773 losetup: /dev/loop200: detach failed: No such device or address 00:16:20.773 + rm -rf /var/tmp/ceph 00:16:20.773 10:16:53 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:16:20.773 10:16:53 spdkcli_rbd -- spdkcli/rbd.sh@1 -- # cleanup 00:16:20.774 10:16:53 spdkcli_rbd -- spdkcli/common.sh@10 -- # '[' -n 77771 ']' 00:16:20.774 10:16:53 spdkcli_rbd -- spdkcli/common.sh@11 -- # killprocess 77771 00:16:20.774 10:16:53 spdkcli_rbd -- common/autotest_common.sh@948 -- # '[' -z 77771 ']' 00:16:20.774 10:16:53 spdkcli_rbd -- common/autotest_common.sh@952 -- # kill -0 77771 00:16:20.774 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (77771) - No such process 00:16:20.774 Process with pid 77771 is not found 00:16:20.774 10:16:53 spdkcli_rbd -- common/autotest_common.sh@975 -- # echo 'Process with pid 77771 is not found' 00:16:20.774 10:16:53 spdkcli_rbd -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:20.774 10:16:53 spdkcli_rbd -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:20.774 10:16:53 spdkcli_rbd -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:20.774 10:16:53 spdkcli_rbd -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_rbd.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:20.774 ************************************ 00:16:20.774 END TEST spdkcli_rbd 00:16:20.774 ************************************ 00:16:20.774 00:16:20.774 real 0m29.615s 00:16:20.774 user 0m54.837s 00:16:20.774 sys 0m1.561s 00:16:20.774 10:16:53 spdkcli_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:20.774 10:16:53 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:16:20.774 10:16:53 -- common/autotest_common.sh@1142 -- # return 0 00:16:20.774 10:16:53 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:16:20.774 10:16:53 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 
00:16:20.774 10:16:53 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:16:20.774 10:16:53 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:16:20.774 10:16:53 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:16:20.774 10:16:53 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:16:20.774 10:16:53 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:16:20.774 10:16:53 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:16:20.774 10:16:53 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:16:20.774 10:16:53 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:16:20.774 10:16:53 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:16:20.774 10:16:53 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:16:20.774 10:16:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:20.774 10:16:53 -- common/autotest_common.sh@10 -- # set +x 00:16:20.774 10:16:53 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:16:20.774 10:16:53 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:16:20.774 10:16:53 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:16:20.774 10:16:53 -- common/autotest_common.sh@10 -- # set +x 00:16:22.676 INFO: APP EXITING 00:16:22.676 INFO: killing all VMs 00:16:22.676 INFO: killing vhost app 00:16:22.676 INFO: EXIT DONE 00:16:22.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:22.933 Waiting for block devices as requested 00:16:22.933 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:23.191 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:23.755 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:24.013 Cleaning 00:16:24.013 Removing: /var/run/dpdk/spdk0/config 00:16:24.013 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:16:24.013 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:16:24.013 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:16:24.013 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:16:24.013 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:16:24.013 Removing: /var/run/dpdk/spdk0/hugepage_info 00:16:24.013 Removing: /var/run/dpdk/spdk1/config 00:16:24.013 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:16:24.013 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:16:24.013 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:16:24.013 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:16:24.013 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:16:24.013 Removing: /var/run/dpdk/spdk1/hugepage_info 00:16:24.013 Removing: /dev/shm/iscsi_trace.pid68722 00:16:24.013 Removing: /dev/shm/spdk_tgt_trace.pid58850 00:16:24.013 Removing: /var/run/dpdk/spdk0 00:16:24.013 Removing: /var/run/dpdk/spdk1 00:16:24.013 Removing: /var/run/dpdk/spdk_pid58705 00:16:24.013 Removing: /var/run/dpdk/spdk_pid58850 00:16:24.013 Removing: /var/run/dpdk/spdk_pid59042 00:16:24.013 Removing: /var/run/dpdk/spdk_pid59129 00:16:24.013 Removing: /var/run/dpdk/spdk_pid59151 00:16:24.013 Removing: /var/run/dpdk/spdk_pid59266 00:16:24.013 Removing: /var/run/dpdk/spdk_pid59284 00:16:24.013 Removing: /var/run/dpdk/spdk_pid59402 00:16:24.013 Removing: /var/run/dpdk/spdk_pid59578 00:16:24.013 Removing: /var/run/dpdk/spdk_pid59759 00:16:24.013 Removing: /var/run/dpdk/spdk_pid59823 00:16:24.013 Removing: /var/run/dpdk/spdk_pid59899 00:16:24.013 Removing: /var/run/dpdk/spdk_pid59985 00:16:24.013 Removing: /var/run/dpdk/spdk_pid60056 00:16:24.013 Removing: /var/run/dpdk/spdk_pid60099 00:16:24.013 Removing: /var/run/dpdk/spdk_pid60133 00:16:24.013 Removing: /var/run/dpdk/spdk_pid60190 00:16:24.013 Removing: /var/run/dpdk/spdk_pid60290 00:16:24.013 Removing: /var/run/dpdk/spdk_pid60704 00:16:24.013 Removing: /var/run/dpdk/spdk_pid60756 00:16:24.013 Removing: /var/run/dpdk/spdk_pid60807 00:16:24.013 Removing: /var/run/dpdk/spdk_pid60823 00:16:24.013 Removing: /var/run/dpdk/spdk_pid60879 00:16:24.013 Removing: 
/var/run/dpdk/spdk_pid60895 00:16:24.013 Removing: /var/run/dpdk/spdk_pid60962 00:16:24.013 Removing: /var/run/dpdk/spdk_pid60978 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61029 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61047 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61087 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61105 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61222 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61258 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61332 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61385 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61409 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61468 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61502 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61537 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61566 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61606 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61637 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61671 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61706 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61739 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61775 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61804 00:16:24.013 Removing: /var/run/dpdk/spdk_pid61844 00:16:24.272 Removing: /var/run/dpdk/spdk_pid61873 00:16:24.272 Removing: /var/run/dpdk/spdk_pid61914 00:16:24.272 Removing: /var/run/dpdk/spdk_pid61943 00:16:24.272 Removing: /var/run/dpdk/spdk_pid61985 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62014 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62052 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62089 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62124 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62159 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62224 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62315 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62638 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62662 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62681 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62725 00:16:24.272 Removing: 
/var/run/dpdk/spdk_pid62735 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62758 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62774 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62783 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62833 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62847 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62892 00:16:24.272 Removing: /var/run/dpdk/spdk_pid62984 00:16:24.272 Removing: /var/run/dpdk/spdk_pid63734 00:16:24.272 Removing: /var/run/dpdk/spdk_pid64173 00:16:24.272 Removing: /var/run/dpdk/spdk_pid64444 00:16:24.272 Removing: /var/run/dpdk/spdk_pid64741 00:16:24.272 Removing: /var/run/dpdk/spdk_pid64979 00:16:24.272 Removing: /var/run/dpdk/spdk_pid65526 00:16:24.272 Removing: /var/run/dpdk/spdk_pid66936 00:16:24.272 Removing: /var/run/dpdk/spdk_pid67626 00:16:24.272 Removing: /var/run/dpdk/spdk_pid68385 00:16:24.272 Removing: /var/run/dpdk/spdk_pid68418 00:16:24.272 Removing: /var/run/dpdk/spdk_pid68722 00:16:24.272 Removing: /var/run/dpdk/spdk_pid69974 00:16:24.272 Removing: /var/run/dpdk/spdk_pid70351 00:16:24.272 Removing: /var/run/dpdk/spdk_pid70397 00:16:24.272 Removing: /var/run/dpdk/spdk_pid70787 00:16:24.272 Removing: /var/run/dpdk/spdk_pid74064 00:16:24.272 Removing: /var/run/dpdk/spdk_pid74365 00:16:24.272 Removing: /var/run/dpdk/spdk_pid74409 00:16:24.272 Removing: /var/run/dpdk/spdk_pid74481 00:16:24.272 Removing: /var/run/dpdk/spdk_pid74546 00:16:24.272 Removing: /var/run/dpdk/spdk_pid74605 00:16:24.272 Removing: /var/run/dpdk/spdk_pid74771 00:16:24.272 Removing: /var/run/dpdk/spdk_pid74821 00:16:24.272 Removing: /var/run/dpdk/spdk_pid74836 00:16:24.272 Removing: /var/run/dpdk/spdk_pid74862 00:16:24.272 Removing: /var/run/dpdk/spdk_pid74877 00:16:24.272 Removing: /var/run/dpdk/spdk_pid74956 00:16:24.272 Removing: /var/run/dpdk/spdk_pid74999 00:16:24.272 Removing: /var/run/dpdk/spdk_pid75209 00:16:24.272 Removing: /var/run/dpdk/spdk_pid75512 00:16:24.272 Removing: /var/run/dpdk/spdk_pid75755 00:16:24.272 Removing: 
/var/run/dpdk/spdk_pid76628 00:16:24.272 Removing: /var/run/dpdk/spdk_pid76677 00:16:24.272 Removing: /var/run/dpdk/spdk_pid76967 00:16:24.272 Removing: /var/run/dpdk/spdk_pid77149 00:16:24.272 Removing: /var/run/dpdk/spdk_pid77321 00:16:24.272 Removing: /var/run/dpdk/spdk_pid77476 00:16:24.272 Removing: /var/run/dpdk/spdk_pid77576 00:16:24.272 Removing: /var/run/dpdk/spdk_pid77636 00:16:24.272 Removing: /var/run/dpdk/spdk_pid77659 00:16:24.272 Removing: /var/run/dpdk/spdk_pid77771 00:16:24.272 Clean 00:16:24.272 10:16:57 -- common/autotest_common.sh@1451 -- # return 0 00:16:24.272 10:16:57 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:16:24.272 10:16:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.272 10:16:57 -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 10:16:57 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:16:24.531 10:16:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.531 10:16:57 -- common/autotest_common.sh@10 -- # set +x 00:16:24.531 10:16:57 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:16:24.531 10:16:57 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:16:24.531 10:16:57 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:16:24.531 10:16:57 -- spdk/autotest.sh@391 -- # hash lcov 00:16:24.531 10:16:57 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:16:24.531 10:16:57 -- spdk/autotest.sh@393 -- # hostname 00:16:24.531 10:16:57 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:16:24.789 geninfo: WARNING: invalid characters removed from testname! 
00:16:56.867 10:17:25 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:56.867 10:17:29 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:58.767 10:17:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:01.333 10:17:34 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:03.885 10:17:36 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:05.784 10:17:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:08.312 10:17:41 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:17:08.312 10:17:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:08.312 10:17:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:17:08.312 10:17:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.312 10:17:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.312 10:17:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.312 10:17:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.312 10:17:41 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.312 10:17:41 -- paths/export.sh@5 -- $ export PATH 00:17:08.312 10:17:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.312 10:17:41 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:17:08.312 10:17:41 -- common/autobuild_common.sh@447 -- $ date +%s 00:17:08.312 10:17:41 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721902661.XXXXXX 00:17:08.569 10:17:41 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721902661.c6Luex 00:17:08.569 10:17:41 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:17:08.569 10:17:41 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:17:08.569 10:17:41 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:17:08.569 10:17:41 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:17:08.569 10:17:41 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:17:08.569 10:17:41 -- common/autobuild_common.sh@463 -- $ 
get_config_params 00:17:08.569 10:17:41 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:17:08.569 10:17:41 -- common/autotest_common.sh@10 -- $ set +x 00:17:08.569 10:17:41 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-coverage --with-ublk' 00:17:08.569 10:17:41 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:17:08.569 10:17:41 -- pm/common@17 -- $ local monitor 00:17:08.569 10:17:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:08.569 10:17:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:08.569 10:17:41 -- pm/common@21 -- $ date +%s 00:17:08.569 10:17:41 -- pm/common@25 -- $ sleep 1 00:17:08.569 10:17:41 -- pm/common@21 -- $ date +%s 00:17:08.569 10:17:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721902661 00:17:08.569 10:17:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721902661 00:17:08.569 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721902661_collect-vmstat.pm.log 00:17:08.569 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721902661_collect-cpu-load.pm.log 00:17:09.501 10:17:42 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:17:09.501 10:17:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:17:09.501 10:17:42 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:17:09.501 10:17:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:17:09.501 10:17:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:17:09.501 10:17:42 -- spdk/autopackage.sh@19 -- $ timing_finish 00:17:09.501 10:17:42 -- 
common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:17:09.501 10:17:42 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:17:09.501 10:17:42 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:09.501 10:17:42 -- spdk/autopackage.sh@20 -- $ exit 0 00:17:09.501 10:17:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:17:09.501 10:17:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:17:09.501 10:17:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:17:09.501 10:17:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:09.501 10:17:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:17:09.501 10:17:42 -- pm/common@44 -- $ pid=80392 00:17:09.501 10:17:42 -- pm/common@50 -- $ kill -TERM 80392 00:17:09.501 10:17:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:09.501 10:17:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:17:09.501 10:17:42 -- pm/common@44 -- $ pid=80394 00:17:09.501 10:17:42 -- pm/common@50 -- $ kill -TERM 80394 00:17:09.501 + [[ -n 5172 ]] 00:17:09.501 + sudo kill 5172 00:17:09.511 [Pipeline] } 00:17:09.539 [Pipeline] // timeout 00:17:09.545 [Pipeline] } 00:17:09.564 [Pipeline] // stage 00:17:09.569 [Pipeline] } 00:17:09.584 [Pipeline] // catchError 00:17:09.593 [Pipeline] stage 00:17:09.595 [Pipeline] { (Stop VM) 00:17:09.611 [Pipeline] sh 00:17:09.888 + vagrant halt 00:17:13.171 ==> default: Halting domain... 00:17:19.834 [Pipeline] sh 00:17:20.109 + vagrant destroy -f 00:17:24.294 ==> default: Removing domain... 
00:17:24.307 [Pipeline] sh 00:17:24.586 + mv output /var/jenkins/workspace/iscsi-vg-autotest/output 00:17:24.595 [Pipeline] } 00:17:24.613 [Pipeline] // stage 00:17:24.618 [Pipeline] } 00:17:24.635 [Pipeline] // dir 00:17:24.640 [Pipeline] } 00:17:24.658 [Pipeline] // wrap 00:17:24.664 [Pipeline] } 00:17:24.679 [Pipeline] // catchError 00:17:24.689 [Pipeline] stage 00:17:24.691 [Pipeline] { (Epilogue) 00:17:24.706 [Pipeline] sh 00:17:24.987 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:17:31.574 [Pipeline] catchError 00:17:31.575 [Pipeline] { 00:17:31.589 [Pipeline] sh 00:17:31.868 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:17:32.126 Artifacts sizes are good 00:17:32.134 [Pipeline] } 00:17:32.151 [Pipeline] // catchError 00:17:32.162 [Pipeline] archiveArtifacts 00:17:32.169 Archiving artifacts 00:17:33.543 [Pipeline] cleanWs 00:17:33.555 [WS-CLEANUP] Deleting project workspace... 00:17:33.555 [WS-CLEANUP] Deferred wipeout is used... 00:17:33.562 [WS-CLEANUP] done 00:17:33.563 [Pipeline] } 00:17:33.579 [Pipeline] // stage 00:17:33.585 [Pipeline] } 00:17:33.602 [Pipeline] // node 00:17:33.607 [Pipeline] End of Pipeline 00:17:33.643 Finished: SUCCESS