00:00:00.001 Started by upstream project "autotest-per-patch" build number 127191
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "jbp-per-patch" build number 24328
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.049 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/iscsi-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.049 The recommended git tool is: git
00:00:00.049 using credential 00000000-0000-0000-0000-000000000002
00:00:00.052 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/iscsi-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.079 Fetching changes from the remote Git repository
00:00:00.082 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.127 Using shallow fetch with depth 1
00:00:00.127 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.127 > git --version # timeout=10
00:00:00.180 > git --version # 'git version 2.39.2'
00:00:00.180 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.216 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.216 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/41/22241/26 # timeout=5
00:00:04.282 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.294 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.305 Checking out Revision 124d5bb683991a063807d96399433650600a89c8 (FETCH_HEAD)
00:00:04.305 > git config core.sparsecheckout # timeout=10
00:00:04.316 > git read-tree -mu HEAD # timeout=10
00:00:04.333 > git checkout -f 124d5bb683991a063807d96399433650600a89c8 # timeout=5
00:00:04.353 Commit message: "jenkins/jjb-config: Add release-build jobs to per-patch and nightly"
00:00:04.353 > git rev-list --no-walk 8bbbe8e4d16191ed73088cea52fca0d797923136 # timeout=10
00:00:04.455 [Pipeline] Start of Pipeline
00:00:04.469 [Pipeline] library
00:00:04.470 Loading library shm_lib@master
00:00:04.470 Library shm_lib@master is cached. Copying from home.
00:00:04.486 [Pipeline] node
00:00:04.495 Running on VM-host-WFP1 in /var/jenkins/workspace/iscsi-vg-autotest_2
00:00:04.496 [Pipeline] {
00:00:04.506 [Pipeline] catchError
00:00:04.507 [Pipeline] {
00:00:04.517 [Pipeline] wrap
00:00:04.525 [Pipeline] {
00:00:04.533 [Pipeline] stage
00:00:04.535 [Pipeline] { (Prologue)
00:00:04.554 [Pipeline] echo
00:00:04.556 Node: VM-host-WFP1
00:00:04.562 [Pipeline] cleanWs
00:00:04.572 [WS-CLEANUP] Deleting project workspace...
00:00:04.572 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.578 [WS-CLEANUP] done
00:00:04.742 [Pipeline] setCustomBuildProperty
00:00:04.824 [Pipeline] httpRequest
00:00:04.864 [Pipeline] echo
00:00:04.866 Sorcerer 10.211.164.101 is alive
00:00:04.875 [Pipeline] httpRequest
00:00:04.880 HttpMethod: GET
00:00:04.880 URL: http://10.211.164.101/packages/jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz
00:00:04.881 Sending request to url: http://10.211.164.101/packages/jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz
00:00:04.902 Response Code: HTTP/1.1 200 OK
00:00:04.902 Success: Status code 200 is in the accepted range: 200,404
00:00:04.903 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest_2/jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz
00:00:25.010 [Pipeline] sh
00:00:25.329 + tar --no-same-owner -xf jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz
00:00:25.370 [Pipeline] httpRequest
00:00:25.388 [Pipeline] echo
00:00:25.390 Sorcerer 10.211.164.101 is alive
00:00:25.399 [Pipeline] httpRequest
00:00:25.404 HttpMethod: GET
00:00:25.405 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:25.405 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:25.408 Response Code: HTTP/1.1 200 OK
00:00:25.409 Success: Status code 200 is in the accepted range: 200,404
00:00:25.410 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest_2/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:02:56.536 [Pipeline] sh
00:02:56.819 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:02:59.361 [Pipeline] sh
00:02:59.642 + git -C spdk log --oneline -n5
00:02:59.642 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:02:59.642 fc2398dfa raid: clear base bdev configure_cb after executing 00:02:59.642 5558f3f50 raid: complete bdev_raid_create after sb is written 00:02:59.642 d005e023b raid: fix empty slot not updated in sb after resize 00:02:59.642 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:02:59.661 [Pipeline] writeFile 00:02:59.679 [Pipeline] sh 00:02:59.967 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:00.079 [Pipeline] sh 00:03:00.360 + cat autorun-spdk.conf 00:03:00.360 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:00.360 SPDK_TEST_ISCSI_INITIATOR=1 00:03:00.360 SPDK_TEST_ISCSI=1 00:03:00.360 SPDK_TEST_RBD=1 00:03:00.360 SPDK_RUN_UBSAN=1 00:03:00.360 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:00.366 RUN_NIGHTLY=0 00:03:00.368 [Pipeline] } 00:03:00.384 [Pipeline] // stage 00:03:00.397 [Pipeline] stage 00:03:00.399 [Pipeline] { (Run VM) 00:03:00.413 [Pipeline] sh 00:03:00.692 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:00.692 + echo 'Start stage prepare_nvme.sh' 00:03:00.692 Start stage prepare_nvme.sh 00:03:00.692 + [[ -n 7 ]] 00:03:00.692 + disk_prefix=ex7 00:03:00.692 + [[ -n /var/jenkins/workspace/iscsi-vg-autotest_2 ]] 00:03:00.692 + [[ -e /var/jenkins/workspace/iscsi-vg-autotest_2/autorun-spdk.conf ]] 00:03:00.692 + source /var/jenkins/workspace/iscsi-vg-autotest_2/autorun-spdk.conf 00:03:00.692 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:00.692 ++ SPDK_TEST_ISCSI_INITIATOR=1 00:03:00.692 ++ SPDK_TEST_ISCSI=1 00:03:00.692 ++ SPDK_TEST_RBD=1 00:03:00.692 ++ SPDK_RUN_UBSAN=1 00:03:00.692 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:00.692 ++ RUN_NIGHTLY=0 00:03:00.692 + cd /var/jenkins/workspace/iscsi-vg-autotest_2 00:03:00.692 + nvme_files=() 00:03:00.692 + declare -A nvme_files 00:03:00.692 + backend_dir=/var/lib/libvirt/images/backends 00:03:00.692 + nvme_files['nvme.img']=5G 00:03:00.692 + nvme_files['nvme-cmb.img']=5G 00:03:00.692 + nvme_files['nvme-multi0.img']=4G 00:03:00.692 + nvme_files['nvme-multi1.img']=4G 00:03:00.692 + nvme_files['nvme-multi2.img']=4G 00:03:00.692 + nvme_files['nvme-openstack.img']=8G 00:03:00.692 + nvme_files['nvme-zns.img']=5G 00:03:00.692 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:00.692 + (( SPDK_TEST_FTL == 1 )) 00:03:00.692 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:00.692 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:03:00.692 + for nvme in "${!nvme_files[@]}" 00:03:00.692 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:03:00.692 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:00.692 + for nvme in "${!nvme_files[@]}" 00:03:00.692 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:03:00.692 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:00.692 + for nvme in "${!nvme_files[@]}" 00:03:00.692 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:03:00.692 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:00.692 + for nvme in "${!nvme_files[@]}" 00:03:00.692 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:03:00.692 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:00.692 + for nvme in "${!nvme_files[@]}" 00:03:00.692 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:03:00.692 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:00.692 + for nvme in "${!nvme_files[@]}" 00:03:00.692 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:03:00.950 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:00.950 + for nvme in "${!nvme_files[@]}" 00:03:00.950 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:03:00.950 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:00.950 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:03:00.950 + echo 'End stage prepare_nvme.sh' 00:03:00.950 End stage prepare_nvme.sh 00:03:00.961 [Pipeline] sh 00:03:01.242 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:01.242 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:03:01.242 00:03:01.242 DIR=/var/jenkins/workspace/iscsi-vg-autotest_2/spdk/scripts/vagrant 00:03:01.242 SPDK_DIR=/var/jenkins/workspace/iscsi-vg-autotest_2/spdk 00:03:01.242 VAGRANT_TARGET=/var/jenkins/workspace/iscsi-vg-autotest_2 00:03:01.242 HELP=0 00:03:01.242 DRY_RUN=0 00:03:01.242 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:03:01.242 NVME_DISKS_TYPE=nvme,nvme, 00:03:01.242 NVME_AUTO_CREATE=0 00:03:01.242 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:03:01.242 NVME_CMB=,, 00:03:01.242 NVME_PMR=,, 00:03:01.242 NVME_ZNS=,, 00:03:01.242 NVME_MS=,, 00:03:01.242 NVME_FDP=,, 00:03:01.242 
SPDK_VAGRANT_DISTRO=fedora38 00:03:01.242 SPDK_VAGRANT_VMCPU=10 00:03:01.242 SPDK_VAGRANT_VMRAM=12288 00:03:01.242 SPDK_VAGRANT_PROVIDER=libvirt 00:03:01.242 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:01.242 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:01.242 SPDK_OPENSTACK_NETWORK=0 00:03:01.242 VAGRANT_PACKAGE_BOX=0 00:03:01.242 VAGRANTFILE=/var/jenkins/workspace/iscsi-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:03:01.242 FORCE_DISTRO=true 00:03:01.242 VAGRANT_BOX_VERSION= 00:03:01.242 EXTRA_VAGRANTFILES= 00:03:01.242 NIC_MODEL=e1000 00:03:01.242 00:03:01.242 mkdir: created directory '/var/jenkins/workspace/iscsi-vg-autotest_2/fedora38-libvirt' 00:03:01.242 /var/jenkins/workspace/iscsi-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/iscsi-vg-autotest_2 00:03:03.778 Bringing machine 'default' up with 'libvirt' provider... 00:03:05.160 ==> default: Creating image (snapshot of base box volume). 00:03:05.418 ==> default: Creating domain with the following settings... 00:03:05.418 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721926436_8c27f1f5cffaa9b49c4b 00:03:05.418 ==> default: -- Domain type: kvm 00:03:05.418 ==> default: -- Cpus: 10 00:03:05.418 ==> default: -- Feature: acpi 00:03:05.418 ==> default: -- Feature: apic 00:03:05.418 ==> default: -- Feature: pae 00:03:05.418 ==> default: -- Memory: 12288M 00:03:05.418 ==> default: -- Memory Backing: hugepages: 00:03:05.418 ==> default: -- Management MAC: 00:03:05.418 ==> default: -- Loader: 00:03:05.418 ==> default: -- Nvram: 00:03:05.418 ==> default: -- Base box: spdk/fedora38 00:03:05.418 ==> default: -- Storage pool: default 00:03:05.418 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721926436_8c27f1f5cffaa9b49c4b.img (20G) 00:03:05.418 ==> default: -- Volume Cache: default 00:03:05.418 ==> default: -- Kernel: 00:03:05.418 ==> default: -- Initrd: 00:03:05.418 ==> default: -- Graphics Type: vnc 00:03:05.418 ==> default: -- Graphics Port: -1 00:03:05.418 ==> default: -- Graphics IP: 127.0.0.1 00:03:05.418 ==> default: -- Graphics Password: Not defined 00:03:05.418 ==> default: -- Video Type: cirrus 00:03:05.418 ==> default: -- Video VRAM: 9216 00:03:05.418 ==> default: -- Sound Type: 00:03:05.418 ==> default: -- Keymap: en-us 00:03:05.418 ==> default: -- TPM Path: 00:03:05.418 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:05.418 ==> default: -- Command line args: 00:03:05.418 ==> default: -> value=-device, 00:03:05.418 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:05.418 ==> default: -> value=-drive, 00:03:05.418 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:03:05.418 ==> default: -> value=-device, 00:03:05.418 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:05.418 ==> default: -> value=-device, 00:03:05.418 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:05.418 ==> default: -> value=-drive, 00:03:05.418 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:03:05.418 ==> default: -> value=-device, 00:03:05.418 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:05.418 ==> default: -> value=-drive, 00:03:05.418 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:03:05.418 ==> default: -> value=-device, 00:03:05.418 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:05.418 ==> default: -> value=-drive, 00:03:05.418 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:03:05.418 ==> default: -> value=-device, 00:03:05.418 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:05.986 ==> default: Creating shared folders metadata... 00:03:05.986 ==> default: Starting domain. 00:03:07.908 ==> default: Waiting for domain to get an IP address... 00:03:25.998 ==> default: Waiting for SSH to become available... 00:03:25.998 ==> default: Configuring and enabling network interfaces... 00:03:31.324 default: SSH address: 192.168.121.33:22 00:03:31.324 default: SSH username: vagrant 00:03:31.324 default: SSH auth method: private key 00:03:33.857 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:41.978 ==> default: Mounting SSHFS shared folder... 00:03:44.538 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:03:44.538 ==> default: Checking Mount.. 00:03:45.914 ==> default: Folder Successfully Mounted! 00:03:45.914 ==> default: Running provisioner: file... 00:03:46.850 default: ~/.gitconfig => .gitconfig 00:03:47.417 00:03:47.417 SUCCESS! 00:03:47.417 00:03:47.417 cd to /var/jenkins/workspace/iscsi-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:03:47.417 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:47.417 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/iscsi-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:03:47.417 00:03:47.425 [Pipeline] } 00:03:47.442 [Pipeline] // stage 00:03:47.450 [Pipeline] dir 00:03:47.450 Running in /var/jenkins/workspace/iscsi-vg-autotest_2/fedora38-libvirt 00:03:47.452 [Pipeline] { 00:03:47.463 [Pipeline] catchError 00:03:47.465 [Pipeline] { 00:03:47.478 [Pipeline] sh 00:03:47.760 + vagrant ssh-config --host vagrant 00:03:47.760 + sed -ne /^Host/,$p 00:03:47.760 + tee ssh_conf 00:03:51.043 Host vagrant 00:03:51.043 HostName 192.168.121.33 00:03:51.043 User vagrant 00:03:51.043 Port 22 00:03:51.043 UserKnownHostsFile /dev/null 00:03:51.043 StrictHostKeyChecking no 00:03:51.043 PasswordAuthentication no 00:03:51.043 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:03:51.043 IdentitiesOnly yes 00:03:51.043 LogLevel FATAL 00:03:51.043 ForwardAgent yes 00:03:51.043 ForwardX11 yes 00:03:51.043 00:03:51.057 [Pipeline] withEnv 00:03:51.059 [Pipeline] { 00:03:51.096 [Pipeline] sh 00:03:51.373 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:51.373 source /etc/os-release 00:03:51.373 [[ -e /image.version ]] && img=$(< /image.version) 00:03:51.373 # Minimal, systemd-like check. 
00:03:51.373 if [[ -e /.dockerenv ]]; then 00:03:51.373 # Clear garbage from the node's name: 00:03:51.373 # agt-er_autotest_547-896 -> autotest_547-896 00:03:51.373 # $HOSTNAME is the actual container id 00:03:51.373 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:51.373 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:51.373 # We can assume this is a mount from a host where container is running, 00:03:51.373 # so fetch its hostname to easily identify the target swarm worker. 00:03:51.373 container="$(< /etc/hostname) ($agent)" 00:03:51.373 else 00:03:51.373 # Fallback 00:03:51.373 container=$agent 00:03:51.373 fi 00:03:51.373 fi 00:03:51.373 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:51.373 00:03:51.644 [Pipeline] } 00:03:51.665 [Pipeline] // withEnv 00:03:51.673 [Pipeline] setCustomBuildProperty 00:03:51.688 [Pipeline] stage 00:03:51.690 [Pipeline] { (Tests) 00:03:51.706 [Pipeline] sh 00:03:51.988 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:52.261 [Pipeline] sh 00:03:52.586 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:52.861 [Pipeline] timeout 00:03:52.861 Timeout set to expire in 45 min 00:03:52.863 [Pipeline] { 00:03:52.879 [Pipeline] sh 00:03:53.160 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:53.728 HEAD is now at 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 00:03:53.741 [Pipeline] sh 00:03:54.019 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:54.292 [Pipeline] sh 00:03:54.575 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:54.852 [Pipeline] sh 00:03:55.132 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=iscsi-vg-autotest ./autoruner.sh spdk_repo 00:03:55.392 ++ readlink -f spdk_repo 00:03:55.392 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:55.392 + [[ -n /home/vagrant/spdk_repo ]] 00:03:55.392 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:55.392 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:55.392 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:55.392 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:55.392 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:55.392 + [[ iscsi-vg-autotest == pkgdep-* ]] 00:03:55.392 + cd /home/vagrant/spdk_repo 00:03:55.392 + source /etc/os-release 00:03:55.392 ++ NAME='Fedora Linux' 00:03:55.392 ++ VERSION='38 (Cloud Edition)' 00:03:55.392 ++ ID=fedora 00:03:55.392 ++ VERSION_ID=38 00:03:55.392 ++ VERSION_CODENAME= 00:03:55.392 ++ PLATFORM_ID=platform:f38 00:03:55.392 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:03:55.392 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:55.392 ++ LOGO=fedora-logo-icon 00:03:55.392 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:03:55.392 ++ HOME_URL=https://fedoraproject.org/ 00:03:55.392 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:03:55.392 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:55.392 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:55.392 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:55.392 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:03:55.392 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:55.392 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:03:55.392 ++ SUPPORT_END=2024-05-14 00:03:55.392 ++ VARIANT='Cloud Edition' 00:03:55.392 ++ VARIANT_ID=cloud 00:03:55.392 + uname -a 00:03:55.392 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:03:55.392 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:55.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.960 Hugepages 00:03:55.960 node hugesize free / total 00:03:55.960 node0 1048576kB 0 / 0 00:03:55.960 node0 2048kB 0 / 0 00:03:55.960 00:03:55.960 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:55.960 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:55.960 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:55.960 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:55.960 + rm -f /tmp/spdk-ld-path 00:03:55.960 + source autorun-spdk.conf 00:03:55.960 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:55.960 ++ SPDK_TEST_ISCSI_INITIATOR=1 00:03:55.960 ++ SPDK_TEST_ISCSI=1 00:03:55.960 ++ SPDK_TEST_RBD=1 00:03:55.960 ++ SPDK_RUN_UBSAN=1 00:03:55.960 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:55.960 ++ RUN_NIGHTLY=0 00:03:55.960 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:55.960 + [[ -n '' ]] 00:03:55.960 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:55.960 + for M in /var/spdk/build-*-manifest.txt 00:03:55.960 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:55.960 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:55.960 + for M in /var/spdk/build-*-manifest.txt 00:03:55.960 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:55.960 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:55.960 ++ uname 00:03:55.960 + [[ Linux == \L\i\n\u\x ]] 00:03:55.960 + sudo dmesg -T 00:03:56.220 + sudo dmesg --clear 00:03:56.220 + dmesg_pid=5108 00:03:56.220 + [[ Fedora Linux == FreeBSD ]] 00:03:56.220 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:56.220 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:56.220 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:56.220 + sudo dmesg -Tw 00:03:56.220 + [[ -x /usr/src/fio-static/fio ]] 00:03:56.220 + export FIO_BIN=/usr/src/fio-static/fio 00:03:56.220 + FIO_BIN=/usr/src/fio-static/fio 00:03:56.220 + [[ '' == 
\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:56.220 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:56.220 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:56.220 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:56.220 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:56.220 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:56.220 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:56.220 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:56.220 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:56.220 Test configuration: 00:03:56.220 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:56.220 SPDK_TEST_ISCSI_INITIATOR=1 00:03:56.220 SPDK_TEST_ISCSI=1 00:03:56.220 SPDK_TEST_RBD=1 00:03:56.220 SPDK_RUN_UBSAN=1 00:03:56.220 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:56.220 RUN_NIGHTLY=0 16:54:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:56.220 16:54:48 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:56.220 16:54:48 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:56.220 16:54:48 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:56.220 16:54:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.220 16:54:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.220 16:54:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.220 16:54:48 -- paths/export.sh@5 -- $ export PATH 00:03:56.220 16:54:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.220 16:54:48 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:56.220 16:54:48 -- common/autobuild_common.sh@447 -- $ date +%s 00:03:56.220 16:54:48 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721926488.XXXXXX 00:03:56.220 16:54:48 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721926488.0EYCvS 00:03:56.220 16:54:48 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:03:56.220 16:54:48 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:03:56.220 16:54:48 -- 
common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:56.220 16:54:48 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:56.220 16:54:48 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:56.220 16:54:48 -- common/autobuild_common.sh@463 -- $ get_config_params 00:03:56.220 16:54:48 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:03:56.220 16:54:48 -- common/autotest_common.sh@10 -- $ set +x 00:03:56.220 16:54:48 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-coverage --with-ublk' 00:03:56.220 16:54:48 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:03:56.220 16:54:48 -- pm/common@17 -- $ local monitor 00:03:56.220 16:54:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:56.220 16:54:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:56.220 16:54:48 -- pm/common@25 -- $ sleep 1 00:03:56.220 16:54:48 -- pm/common@21 -- $ date +%s 00:03:56.220 16:54:48 -- pm/common@21 -- $ date +%s 00:03:56.220 16:54:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721926488 00:03:56.220 16:54:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721926488 00:03:56.479 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721926488_collect-vmstat.pm.log 00:03:56.479 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721926488_collect-cpu-load.pm.log 00:03:57.502 16:54:49 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:03:57.502 16:54:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:57.502 16:54:49 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:57.502 16:54:49 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:57.502 16:54:49 -- spdk/autobuild.sh@16 -- $ date -u 00:03:57.502 Thu Jul 25 04:54:49 PM UTC 2024 00:03:57.502 16:54:49 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:57.502 v24.09-pre-321-g704257090 00:03:57.502 16:54:49 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:57.502 16:54:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:57.502 16:54:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:57.502 16:54:49 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:57.502 16:54:49 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:57.502 16:54:49 -- common/autotest_common.sh@10 -- $ set +x 00:03:57.502 ************************************ 00:03:57.502 START TEST ubsan 00:03:57.502 ************************************ 00:03:57.502 using ubsan 00:03:57.502 16:54:49 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:57.502 00:03:57.502 real 0m0.000s 00:03:57.502 user 0m0.000s 00:03:57.502 sys 0m0.000s 00:03:57.502 16:54:49 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:57.502 16:54:49 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:57.502 
00:03:57.502 ************************************
00:03:57.502 END TEST ubsan
00:03:57.502 ************************************
00:03:57.502 16:54:49 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:57.502 16:54:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:57.502 16:54:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:57.502 16:54:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:57.502 16:54:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:57.502 16:54:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:57.502 16:54:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:57.502 16:54:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:57.502 16:54:49 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-coverage --with-ublk --with-shared
00:03:57.502 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:57.502 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:58.070 Using 'verbs' RDMA provider
00:04:14.334 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:32.437 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:32.437 Creating mk/config.mk...done.
00:04:32.437 Creating mk/cc.flags.mk...done.
00:04:32.437 Type 'make' to build.
00:04:32.437 16:55:22 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:04:32.437 16:55:22 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:04:32.437 16:55:22 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:04:32.437 16:55:22 -- common/autotest_common.sh@10 -- $ set +x
00:04:32.437 ************************************
00:04:32.437 START TEST make
00:04:32.437 ************************************
00:04:32.437 16:55:22 make -- common/autotest_common.sh@1125 -- $ make -j10
00:04:32.437 make[1]: Nothing to be done for 'all'.
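Condensed, the build phase traced above is an ordinary configure-and-make sequence. A minimal sketch of the equivalent manual steps, with the flags copied verbatim from the autobuild.sh@67 line in the log (a sketch only; it assumes the same /home/vagrant/spdk_repo checkout and fio sources laid out as on the CI VM):

    #!/usr/bin/env bash
    # Rebuild SPDK the way this job does; flags are copied from the log above.
    set -euo pipefail
    cd /home/vagrant/spdk_repo/spdk

    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --with-rbd --enable-ubsan --enable-coverage --with-ublk --with-shared

    # Same parallelism as the "run_test make make -j10" wrapper above.
    make -j10

With --with-shared and no explicit DPDK given, configure falls back to the bundled dpdk submodule, which is what triggers the Meson/ninja sub-build that follows.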
00:04:40.556 The Meson build system 00:04:40.556 Version: 1.3.1 00:04:40.556 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:40.556 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:40.556 Build type: native build 00:04:40.556 Program cat found: YES (/usr/bin/cat) 00:04:40.556 Project name: DPDK 00:04:40.556 Project version: 24.03.0 00:04:40.556 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:04:40.556 C linker for the host machine: cc ld.bfd 2.39-16 00:04:40.556 Host machine cpu family: x86_64 00:04:40.556 Host machine cpu: x86_64 00:04:40.556 Message: ## Building in Developer Mode ## 00:04:40.556 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:40.556 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:40.556 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:40.556 Program python3 found: YES (/usr/bin/python3) 00:04:40.556 Program cat found: YES (/usr/bin/cat) 00:04:40.556 Compiler for C supports arguments -march=native: YES 00:04:40.556 Checking for size of "void *" : 8 00:04:40.556 Checking for size of "void *" : 8 (cached) 00:04:40.556 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:04:40.556 Library m found: YES 00:04:40.556 Library numa found: YES 00:04:40.556 Has header "numaif.h" : YES 00:04:40.556 Library fdt found: NO 00:04:40.556 Library execinfo found: NO 00:04:40.556 Has header "execinfo.h" : YES 00:04:40.556 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:40.556 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:40.556 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:40.556 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:40.556 Run-time dependency openssl found: YES 3.0.9 00:04:40.556 Run-time dependency libpcap found: YES 1.10.4 00:04:40.556 Has header "pcap.h" with dependency libpcap: YES 00:04:40.556 Compiler for C supports arguments -Wcast-qual: YES 00:04:40.556 Compiler for C supports arguments -Wdeprecated: YES 00:04:40.556 Compiler for C supports arguments -Wformat: YES 00:04:40.556 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:40.556 Compiler for C supports arguments -Wformat-security: NO 00:04:40.556 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:40.556 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:40.556 Compiler for C supports arguments -Wnested-externs: YES 00:04:40.556 Compiler for C supports arguments -Wold-style-definition: YES 00:04:40.556 Compiler for C supports arguments -Wpointer-arith: YES 00:04:40.556 Compiler for C supports arguments -Wsign-compare: YES 00:04:40.556 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:40.556 Compiler for C supports arguments -Wundef: YES 00:04:40.556 Compiler for C supports arguments -Wwrite-strings: YES 00:04:40.556 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:40.556 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:40.556 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:40.556 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:40.556 Program objdump found: YES (/usr/bin/objdump) 00:04:40.556 Compiler for C supports arguments -mavx512f: YES 00:04:40.556 Checking if "AVX512 checking" compiles: YES 00:04:40.556 Fetching value of define "__SSE4_2__" : 1 00:04:40.556 Fetching value of define 
"__AES__" : 1 00:04:40.556 Fetching value of define "__AVX__" : 1 00:04:40.556 Fetching value of define "__AVX2__" : 1 00:04:40.556 Fetching value of define "__AVX512BW__" : 1 00:04:40.556 Fetching value of define "__AVX512CD__" : 1 00:04:40.556 Fetching value of define "__AVX512DQ__" : 1 00:04:40.556 Fetching value of define "__AVX512F__" : 1 00:04:40.556 Fetching value of define "__AVX512VL__" : 1 00:04:40.556 Fetching value of define "__PCLMUL__" : 1 00:04:40.556 Fetching value of define "__RDRND__" : 1 00:04:40.556 Fetching value of define "__RDSEED__" : 1 00:04:40.556 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:40.556 Fetching value of define "__znver1__" : (undefined) 00:04:40.556 Fetching value of define "__znver2__" : (undefined) 00:04:40.556 Fetching value of define "__znver3__" : (undefined) 00:04:40.556 Fetching value of define "__znver4__" : (undefined) 00:04:40.556 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:40.556 Message: lib/log: Defining dependency "log" 00:04:40.556 Message: lib/kvargs: Defining dependency "kvargs" 00:04:40.556 Message: lib/telemetry: Defining dependency "telemetry" 00:04:40.556 Checking for function "getentropy" : NO 00:04:40.556 Message: lib/eal: Defining dependency "eal" 00:04:40.556 Message: lib/ring: Defining dependency "ring" 00:04:40.556 Message: lib/rcu: Defining dependency "rcu" 00:04:40.556 Message: lib/mempool: Defining dependency "mempool" 00:04:40.556 Message: lib/mbuf: Defining dependency "mbuf" 00:04:40.556 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:40.556 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:40.556 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:40.556 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:40.556 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:40.556 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:40.556 Compiler for C supports arguments -mpclmul: YES 00:04:40.556 Compiler for C supports arguments -maes: YES 00:04:40.556 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:40.556 Compiler for C supports arguments -mavx512bw: YES 00:04:40.556 Compiler for C supports arguments -mavx512dq: YES 00:04:40.556 Compiler for C supports arguments -mavx512vl: YES 00:04:40.556 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:40.556 Compiler for C supports arguments -mavx2: YES 00:04:40.556 Compiler for C supports arguments -mavx: YES 00:04:40.556 Message: lib/net: Defining dependency "net" 00:04:40.556 Message: lib/meter: Defining dependency "meter" 00:04:40.556 Message: lib/ethdev: Defining dependency "ethdev" 00:04:40.556 Message: lib/pci: Defining dependency "pci" 00:04:40.556 Message: lib/cmdline: Defining dependency "cmdline" 00:04:40.556 Message: lib/hash: Defining dependency "hash" 00:04:40.556 Message: lib/timer: Defining dependency "timer" 00:04:40.556 Message: lib/compressdev: Defining dependency "compressdev" 00:04:40.556 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:40.556 Message: lib/dmadev: Defining dependency "dmadev" 00:04:40.556 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:40.556 Message: lib/power: Defining dependency "power" 00:04:40.556 Message: lib/reorder: Defining dependency "reorder" 00:04:40.556 Message: lib/security: Defining dependency "security" 00:04:40.556 Has header "linux/userfaultfd.h" : YES 00:04:40.556 Has header "linux/vduse.h" : YES 00:04:40.556 Message: lib/vhost: Defining dependency "vhost" 00:04:40.556 Compiler for C 
supports arguments -Wno-format-truncation: YES (cached) 00:04:40.556 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:40.556 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:40.556 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:40.556 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:40.556 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:40.556 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:40.556 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:40.556 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:40.556 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:40.556 Program doxygen found: YES (/usr/bin/doxygen) 00:04:40.556 Configuring doxy-api-html.conf using configuration 00:04:40.556 Configuring doxy-api-man.conf using configuration 00:04:40.556 Program mandb found: YES (/usr/bin/mandb) 00:04:40.556 Program sphinx-build found: NO 00:04:40.556 Configuring rte_build_config.h using configuration 00:04:40.556 Message: 00:04:40.556 ================= 00:04:40.556 Applications Enabled 00:04:40.556 ================= 00:04:40.556 00:04:40.556 apps: 00:04:40.556 00:04:40.556 00:04:40.556 Message: 00:04:40.556 ================= 00:04:40.556 Libraries Enabled 00:04:40.556 ================= 00:04:40.556 00:04:40.556 libs: 00:04:40.556 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:40.556 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:40.556 cryptodev, dmadev, power, reorder, security, vhost, 00:04:40.556 00:04:40.556 Message: 00:04:40.556 =============== 00:04:40.556 Drivers Enabled 00:04:40.556 =============== 00:04:40.556 00:04:40.556 common: 00:04:40.556 00:04:40.556 bus: 00:04:40.556 pci, vdev, 00:04:40.556 mempool: 00:04:40.556 ring, 00:04:40.556 dma: 00:04:40.556 00:04:40.556 net: 00:04:40.556 00:04:40.556 crypto: 00:04:40.556 00:04:40.556 compress: 00:04:40.556 00:04:40.556 vdpa: 00:04:40.556 00:04:40.556 00:04:40.556 Message: 00:04:40.556 ================= 00:04:40.556 Content Skipped 00:04:40.556 ================= 00:04:40.556 00:04:40.556 apps: 00:04:40.556 dumpcap: explicitly disabled via build config 00:04:40.556 graph: explicitly disabled via build config 00:04:40.556 pdump: explicitly disabled via build config 00:04:40.556 proc-info: explicitly disabled via build config 00:04:40.556 test-acl: explicitly disabled via build config 00:04:40.556 test-bbdev: explicitly disabled via build config 00:04:40.556 test-cmdline: explicitly disabled via build config 00:04:40.556 test-compress-perf: explicitly disabled via build config 00:04:40.556 test-crypto-perf: explicitly disabled via build config 00:04:40.556 test-dma-perf: explicitly disabled via build config 00:04:40.556 test-eventdev: explicitly disabled via build config 00:04:40.556 test-fib: explicitly disabled via build config 00:04:40.556 test-flow-perf: explicitly disabled via build config 00:04:40.556 test-gpudev: explicitly disabled via build config 00:04:40.556 test-mldev: explicitly disabled via build config 00:04:40.556 test-pipeline: explicitly disabled via build config 00:04:40.556 test-pmd: explicitly disabled via build config 00:04:40.556 test-regex: explicitly disabled via build config 00:04:40.556 test-sad: explicitly disabled via build config 00:04:40.556 test-security-perf: explicitly disabled via build config 00:04:40.556 00:04:40.556 libs: 00:04:40.556 argparse: 
explicitly disabled via build config 00:04:40.556 metrics: explicitly disabled via build config 00:04:40.556 acl: explicitly disabled via build config 00:04:40.556 bbdev: explicitly disabled via build config 00:04:40.556 bitratestats: explicitly disabled via build config 00:04:40.556 bpf: explicitly disabled via build config 00:04:40.556 cfgfile: explicitly disabled via build config 00:04:40.556 distributor: explicitly disabled via build config 00:04:40.556 efd: explicitly disabled via build config 00:04:40.556 eventdev: explicitly disabled via build config 00:04:40.556 dispatcher: explicitly disabled via build config 00:04:40.556 gpudev: explicitly disabled via build config 00:04:40.556 gro: explicitly disabled via build config 00:04:40.556 gso: explicitly disabled via build config 00:04:40.556 ip_frag: explicitly disabled via build config 00:04:40.556 jobstats: explicitly disabled via build config 00:04:40.556 latencystats: explicitly disabled via build config 00:04:40.556 lpm: explicitly disabled via build config 00:04:40.556 member: explicitly disabled via build config 00:04:40.556 pcapng: explicitly disabled via build config 00:04:40.556 rawdev: explicitly disabled via build config 00:04:40.556 regexdev: explicitly disabled via build config 00:04:40.556 mldev: explicitly disabled via build config 00:04:40.557 rib: explicitly disabled via build config 00:04:40.557 sched: explicitly disabled via build config 00:04:40.557 stack: explicitly disabled via build config 00:04:40.557 ipsec: explicitly disabled via build config 00:04:40.557 pdcp: explicitly disabled via build config 00:04:40.557 fib: explicitly disabled via build config 00:04:40.557 port: explicitly disabled via build config 00:04:40.557 pdump: explicitly disabled via build config 00:04:40.557 table: explicitly disabled via build config 00:04:40.557 pipeline: explicitly disabled via build config 00:04:40.557 graph: explicitly disabled via build config 00:04:40.557 node: explicitly disabled via build config 00:04:40.557 00:04:40.557 drivers: 00:04:40.557 common/cpt: not in enabled drivers build config 00:04:40.557 common/dpaax: not in enabled drivers build config 00:04:40.557 common/iavf: not in enabled drivers build config 00:04:40.557 common/idpf: not in enabled drivers build config 00:04:40.557 common/ionic: not in enabled drivers build config 00:04:40.557 common/mvep: not in enabled drivers build config 00:04:40.557 common/octeontx: not in enabled drivers build config 00:04:40.557 bus/auxiliary: not in enabled drivers build config 00:04:40.557 bus/cdx: not in enabled drivers build config 00:04:40.557 bus/dpaa: not in enabled drivers build config 00:04:40.557 bus/fslmc: not in enabled drivers build config 00:04:40.557 bus/ifpga: not in enabled drivers build config 00:04:40.557 bus/platform: not in enabled drivers build config 00:04:40.557 bus/uacce: not in enabled drivers build config 00:04:40.557 bus/vmbus: not in enabled drivers build config 00:04:40.557 common/cnxk: not in enabled drivers build config 00:04:40.557 common/mlx5: not in enabled drivers build config 00:04:40.557 common/nfp: not in enabled drivers build config 00:04:40.557 common/nitrox: not in enabled drivers build config 00:04:40.557 common/qat: not in enabled drivers build config 00:04:40.557 common/sfc_efx: not in enabled drivers build config 00:04:40.557 mempool/bucket: not in enabled drivers build config 00:04:40.557 mempool/cnxk: not in enabled drivers build config 00:04:40.557 mempool/dpaa: not in enabled drivers build config 00:04:40.557 mempool/dpaa2: 
not in enabled drivers build config 00:04:40.557 mempool/octeontx: not in enabled drivers build config 00:04:40.557 mempool/stack: not in enabled drivers build config 00:04:40.557 dma/cnxk: not in enabled drivers build config 00:04:40.557 dma/dpaa: not in enabled drivers build config 00:04:40.557 dma/dpaa2: not in enabled drivers build config 00:04:40.557 dma/hisilicon: not in enabled drivers build config 00:04:40.557 dma/idxd: not in enabled drivers build config 00:04:40.557 dma/ioat: not in enabled drivers build config 00:04:40.557 dma/skeleton: not in enabled drivers build config 00:04:40.557 net/af_packet: not in enabled drivers build config 00:04:40.557 net/af_xdp: not in enabled drivers build config 00:04:40.557 net/ark: not in enabled drivers build config 00:04:40.557 net/atlantic: not in enabled drivers build config 00:04:40.557 net/avp: not in enabled drivers build config 00:04:40.557 net/axgbe: not in enabled drivers build config 00:04:40.557 net/bnx2x: not in enabled drivers build config 00:04:40.557 net/bnxt: not in enabled drivers build config 00:04:40.557 net/bonding: not in enabled drivers build config 00:04:40.557 net/cnxk: not in enabled drivers build config 00:04:40.557 net/cpfl: not in enabled drivers build config 00:04:40.557 net/cxgbe: not in enabled drivers build config 00:04:40.557 net/dpaa: not in enabled drivers build config 00:04:40.557 net/dpaa2: not in enabled drivers build config 00:04:40.557 net/e1000: not in enabled drivers build config 00:04:40.557 net/ena: not in enabled drivers build config 00:04:40.557 net/enetc: not in enabled drivers build config 00:04:40.557 net/enetfec: not in enabled drivers build config 00:04:40.557 net/enic: not in enabled drivers build config 00:04:40.557 net/failsafe: not in enabled drivers build config 00:04:40.557 net/fm10k: not in enabled drivers build config 00:04:40.557 net/gve: not in enabled drivers build config 00:04:40.557 net/hinic: not in enabled drivers build config 00:04:40.557 net/hns3: not in enabled drivers build config 00:04:40.557 net/i40e: not in enabled drivers build config 00:04:40.557 net/iavf: not in enabled drivers build config 00:04:40.557 net/ice: not in enabled drivers build config 00:04:40.557 net/idpf: not in enabled drivers build config 00:04:40.557 net/igc: not in enabled drivers build config 00:04:40.557 net/ionic: not in enabled drivers build config 00:04:40.557 net/ipn3ke: not in enabled drivers build config 00:04:40.557 net/ixgbe: not in enabled drivers build config 00:04:40.557 net/mana: not in enabled drivers build config 00:04:40.557 net/memif: not in enabled drivers build config 00:04:40.557 net/mlx4: not in enabled drivers build config 00:04:40.557 net/mlx5: not in enabled drivers build config 00:04:40.557 net/mvneta: not in enabled drivers build config 00:04:40.557 net/mvpp2: not in enabled drivers build config 00:04:40.557 net/netvsc: not in enabled drivers build config 00:04:40.557 net/nfb: not in enabled drivers build config 00:04:40.557 net/nfp: not in enabled drivers build config 00:04:40.557 net/ngbe: not in enabled drivers build config 00:04:40.557 net/null: not in enabled drivers build config 00:04:40.557 net/octeontx: not in enabled drivers build config 00:04:40.557 net/octeon_ep: not in enabled drivers build config 00:04:40.557 net/pcap: not in enabled drivers build config 00:04:40.557 net/pfe: not in enabled drivers build config 00:04:40.557 net/qede: not in enabled drivers build config 00:04:40.557 net/ring: not in enabled drivers build config 00:04:40.557 net/sfc: not in 
enabled drivers build config 00:04:40.557 net/softnic: not in enabled drivers build config 00:04:40.557 net/tap: not in enabled drivers build config 00:04:40.557 net/thunderx: not in enabled drivers build config 00:04:40.557 net/txgbe: not in enabled drivers build config 00:04:40.557 net/vdev_netvsc: not in enabled drivers build config 00:04:40.557 net/vhost: not in enabled drivers build config 00:04:40.557 net/virtio: not in enabled drivers build config 00:04:40.557 net/vmxnet3: not in enabled drivers build config 00:04:40.557 raw/*: missing internal dependency, "rawdev" 00:04:40.557 crypto/armv8: not in enabled drivers build config 00:04:40.557 crypto/bcmfs: not in enabled drivers build config 00:04:40.557 crypto/caam_jr: not in enabled drivers build config 00:04:40.557 crypto/ccp: not in enabled drivers build config 00:04:40.557 crypto/cnxk: not in enabled drivers build config 00:04:40.557 crypto/dpaa_sec: not in enabled drivers build config 00:04:40.557 crypto/dpaa2_sec: not in enabled drivers build config 00:04:40.557 crypto/ipsec_mb: not in enabled drivers build config 00:04:40.557 crypto/mlx5: not in enabled drivers build config 00:04:40.557 crypto/mvsam: not in enabled drivers build config 00:04:40.557 crypto/nitrox: not in enabled drivers build config 00:04:40.557 crypto/null: not in enabled drivers build config 00:04:40.557 crypto/octeontx: not in enabled drivers build config 00:04:40.557 crypto/openssl: not in enabled drivers build config 00:04:40.557 crypto/scheduler: not in enabled drivers build config 00:04:40.557 crypto/uadk: not in enabled drivers build config 00:04:40.557 crypto/virtio: not in enabled drivers build config 00:04:40.557 compress/isal: not in enabled drivers build config 00:04:40.557 compress/mlx5: not in enabled drivers build config 00:04:40.557 compress/nitrox: not in enabled drivers build config 00:04:40.557 compress/octeontx: not in enabled drivers build config 00:04:40.557 compress/zlib: not in enabled drivers build config 00:04:40.557 regex/*: missing internal dependency, "regexdev" 00:04:40.557 ml/*: missing internal dependency, "mldev" 00:04:40.557 vdpa/ifc: not in enabled drivers build config 00:04:40.557 vdpa/mlx5: not in enabled drivers build config 00:04:40.557 vdpa/nfp: not in enabled drivers build config 00:04:40.557 vdpa/sfc: not in enabled drivers build config 00:04:40.557 event/*: missing internal dependency, "eventdev" 00:04:40.557 baseband/*: missing internal dependency, "bbdev" 00:04:40.557 gpu/*: missing internal dependency, "gpudev" 00:04:40.557 00:04:40.557 00:04:40.557 Build targets in project: 85 00:04:40.557 00:04:40.557 DPDK 24.03.0 00:04:40.557 00:04:40.557 User defined options 00:04:40.557 buildtype : debug 00:04:40.557 default_library : shared 00:04:40.557 libdir : lib 00:04:40.557 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:40.557 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:40.557 c_link_args : 00:04:40.557 cpu_instruction_set: native 00:04:40.557 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:40.557 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:40.557 enable_docs : false 00:04:40.557 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:40.557 enable_kmods : false 00:04:40.557 max_lcores : 128 00:04:40.557 tests : false 00:04:40.557 00:04:40.557 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:40.557 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:40.557 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:40.557 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:40.557 [3/268] Linking static target lib/librte_kvargs.a 00:04:40.557 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:40.557 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:40.557 [6/268] Linking static target lib/librte_log.a 00:04:40.557 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.557 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:40.557 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:40.557 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:40.815 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:40.816 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:40.816 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:40.816 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:40.816 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:40.816 [16/268] Linking static target lib/librte_telemetry.a 00:04:40.816 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:40.816 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:41.073 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.073 [20/268] Linking target lib/librte_log.so.24.1 00:04:41.331 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:41.331 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:41.331 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:41.331 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:41.331 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:41.331 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:41.331 [27/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:41.331 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:41.331 [29/268] Linking target lib/librte_kvargs.so.24.1 00:04:41.589 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:41.589 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:41.589 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:41.589 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.589 
[34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:41.846 [35/268] Linking target lib/librte_telemetry.so.24.1 00:04:41.846 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:41.846 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:41.846 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:41.846 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:41.846 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:41.846 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:41.846 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:41.846 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:42.104 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:42.104 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:42.104 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:42.104 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:42.362 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:42.363 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:42.363 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:42.626 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:42.626 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:42.626 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:42.626 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:42.626 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:42.626 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:42.885 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:42.885 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:42.885 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:42.885 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:42.885 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:42.885 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:43.143 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:43.143 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:43.143 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:43.143 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:43.143 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:43.402 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:43.402 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:43.402 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:43.660 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:43.660 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:43.660 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 
00:04:43.660 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:43.660 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:43.660 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:43.660 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:43.660 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:43.918 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:43.918 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:43.918 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:43.918 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:43.918 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:44.177 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:44.177 [85/268] Linking static target lib/librte_eal.a 00:04:44.177 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:44.177 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:44.435 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:44.435 [89/268] Linking static target lib/librte_ring.a 00:04:44.435 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:44.435 [91/268] Linking static target lib/librte_rcu.a 00:04:44.435 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:44.435 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:44.435 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:44.693 [95/268] Linking static target lib/librte_mempool.a 00:04:44.694 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:44.951 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.951 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:44.951 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:44.951 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:44.951 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.210 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:45.210 [103/268] Linking static target lib/librte_mbuf.a 00:04:45.210 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:45.210 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:45.210 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:45.477 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:45.477 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:45.477 [109/268] Linking static target lib/librte_net.a 00:04:45.477 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:45.744 [111/268] Linking static target lib/librte_meter.a 00:04:45.744 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:46.002 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:46.002 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:46.002 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.002 [116/268] Generating 
lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.002 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.260 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:46.260 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.517 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:46.775 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:46.775 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:46.775 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:47.032 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:47.292 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:47.292 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:47.292 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:47.292 [128/268] Linking static target lib/librte_pci.a 00:04:47.292 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:47.292 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:47.292 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:47.574 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:47.574 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:47.574 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:47.574 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:47.574 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:47.574 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:47.574 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:47.574 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:47.574 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:47.574 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:47.574 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:47.574 [143/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:47.574 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:47.574 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:47.848 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:47.848 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:47.848 [148/268] Linking static target lib/librte_cmdline.a 00:04:48.106 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:48.106 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:48.106 [151/268] Linking static target lib/librte_timer.a 00:04:48.106 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:48.106 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:48.364 [154/268] Linking static target lib/librte_ethdev.a 00:04:48.364 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:48.364 [156/268] Compiling C 
object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:48.364 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:48.364 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:48.364 [159/268] Linking static target lib/librte_hash.a 00:04:48.364 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:48.364 [161/268] Linking static target lib/librte_compressdev.a 00:04:48.622 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:48.622 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:48.622 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:48.622 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.880 [166/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:48.880 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:48.880 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:48.880 [169/268] Linking static target lib/librte_dmadev.a 00:04:49.138 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:49.138 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:49.138 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:49.138 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:49.138 [174/268] Linking static target lib/librte_cryptodev.a 00:04:49.396 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:49.396 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:49.396 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:49.396 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:49.396 [179/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:49.655 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:49.655 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:49.655 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:49.655 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:49.913 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:49.913 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:49.913 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:49.913 [187/268] Linking static target lib/librte_reorder.a 00:04:49.913 [188/268] Linking static target lib/librte_power.a 00:04:50.171 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:50.171 [190/268] Linking static target lib/librte_security.a 00:04:50.171 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:50.171 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:50.171 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:50.429 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:50.687 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:50.947 
[196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:50.947 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:50.947 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:50.947 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:51.205 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:51.205 [201/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.205 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:51.463 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:51.463 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:51.463 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:51.722 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:51.722 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:51.722 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:51.722 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:51.722 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:51.722 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.722 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:51.981 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:51.982 [214/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:51.982 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:51.982 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:51.982 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:51.982 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:51.982 [219/268] Linking static target drivers/librte_bus_pci.a 00:04:51.982 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:51.982 [221/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:51.982 [222/268] Linking static target drivers/librte_bus_vdev.a 00:04:51.982 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:52.240 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:52.240 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:52.240 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:52.240 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.498 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.065 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:53.065 [230/268] Linking static target lib/librte_vhost.a 00:04:55.598 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:57.503 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 
00:04:57.503 [233/268] Linking target lib/librte_eal.so.24.1 00:04:57.761 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:57.761 [235/268] Linking target lib/librte_meter.so.24.1 00:04:57.761 [236/268] Linking target lib/librte_ring.so.24.1 00:04:57.761 [237/268] Linking target lib/librte_timer.so.24.1 00:04:57.761 [238/268] Linking target lib/librte_pci.so.24.1 00:04:57.761 [239/268] Linking target lib/librte_dmadev.so.24.1 00:04:57.761 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:58.020 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:58.020 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:58.020 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:58.020 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:58.020 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:58.020 [246/268] Linking target lib/librte_rcu.so.24.1 00:04:58.020 [247/268] Linking target lib/librte_mempool.so.24.1 00:04:58.020 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:58.020 [249/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:58.020 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:58.020 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:58.278 [252/268] Linking target lib/librte_mbuf.so.24.1 00:04:58.278 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:58.278 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:58.278 [255/268] Linking target lib/librte_net.so.24.1 00:04:58.278 [256/268] Linking target lib/librte_reorder.so.24.1 00:04:58.278 [257/268] Linking target lib/librte_compressdev.so.24.1 00:04:58.278 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:58.535 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:58.535 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:58.535 [261/268] Linking target lib/librte_security.so.24.1 00:04:58.535 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:58.535 [263/268] Linking target lib/librte_hash.so.24.1 00:04:58.535 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:58.794 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:58.794 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:58.794 [267/268] Linking target lib/librte_power.so.24.1 00:04:58.794 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:58.794 INFO: autodetecting backend as ninja 00:04:58.794 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:00.178 CC lib/log/log.o 00:05:00.178 CC lib/log/log_deprecated.o 00:05:00.178 CC lib/log/log_flags.o 00:05:00.178 CC lib/ut_mock/mock.o 00:05:00.178 CC lib/ut/ut.o 00:05:00.437 LIB libspdk_log.a 00:05:00.437 LIB libspdk_ut_mock.a 00:05:00.437 LIB libspdk_ut.a 00:05:00.437 SO libspdk_ut_mock.so.6.0 00:05:00.437 SO libspdk_log.so.7.0 00:05:00.437 SO libspdk_ut.so.2.0 00:05:00.437 SYMLINK libspdk_ut_mock.so 00:05:00.437 SYMLINK libspdk_ut.so 00:05:00.437 SYMLINK libspdk_log.so 00:05:00.696 CC lib/ioat/ioat.o 
00:05:00.696 CC lib/util/base64.o 00:05:00.696 CC lib/util/bit_array.o 00:05:00.696 CC lib/util/cpuset.o 00:05:00.696 CC lib/util/crc16.o 00:05:00.696 CC lib/util/crc32c.o 00:05:00.696 CC lib/util/crc32.o 00:05:00.696 CC lib/dma/dma.o 00:05:00.696 CXX lib/trace_parser/trace.o 00:05:00.955 CC lib/vfio_user/host/vfio_user_pci.o 00:05:00.955 CC lib/util/crc32_ieee.o 00:05:00.955 CC lib/util/crc64.o 00:05:00.955 CC lib/vfio_user/host/vfio_user.o 00:05:00.955 LIB libspdk_dma.a 00:05:00.955 CC lib/util/dif.o 00:05:00.955 SO libspdk_dma.so.4.0 00:05:00.955 CC lib/util/fd.o 00:05:00.955 LIB libspdk_ioat.a 00:05:00.955 CC lib/util/fd_group.o 00:05:00.955 CC lib/util/file.o 00:05:00.955 SO libspdk_ioat.so.7.0 00:05:00.955 SYMLINK libspdk_dma.so 00:05:00.955 CC lib/util/hexlify.o 00:05:01.213 CC lib/util/iov.o 00:05:01.213 SYMLINK libspdk_ioat.so 00:05:01.213 CC lib/util/math.o 00:05:01.213 CC lib/util/net.o 00:05:01.213 LIB libspdk_vfio_user.a 00:05:01.213 CC lib/util/pipe.o 00:05:01.213 SO libspdk_vfio_user.so.5.0 00:05:01.213 CC lib/util/strerror_tls.o 00:05:01.213 CC lib/util/string.o 00:05:01.213 SYMLINK libspdk_vfio_user.so 00:05:01.213 CC lib/util/uuid.o 00:05:01.213 CC lib/util/xor.o 00:05:01.213 CC lib/util/zipf.o 00:05:01.471 LIB libspdk_util.a 00:05:01.728 SO libspdk_util.so.10.0 00:05:01.728 LIB libspdk_trace_parser.a 00:05:01.728 SYMLINK libspdk_util.so 00:05:01.728 SO libspdk_trace_parser.so.5.0 00:05:01.986 SYMLINK libspdk_trace_parser.so 00:05:01.986 CC lib/rdma_provider/common.o 00:05:01.986 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:01.986 CC lib/idxd/idxd.o 00:05:01.986 CC lib/idxd/idxd_user.o 00:05:01.986 CC lib/idxd/idxd_kernel.o 00:05:01.986 CC lib/conf/conf.o 00:05:01.986 CC lib/json/json_parse.o 00:05:01.986 CC lib/rdma_utils/rdma_utils.o 00:05:01.986 CC lib/env_dpdk/env.o 00:05:01.986 CC lib/vmd/vmd.o 00:05:02.244 CC lib/vmd/led.o 00:05:02.244 CC lib/env_dpdk/memory.o 00:05:02.244 LIB libspdk_rdma_provider.a 00:05:02.244 LIB libspdk_conf.a 00:05:02.244 SO libspdk_rdma_provider.so.6.0 00:05:02.244 CC lib/env_dpdk/pci.o 00:05:02.244 SO libspdk_conf.so.6.0 00:05:02.244 CC lib/json/json_util.o 00:05:02.244 SYMLINK libspdk_rdma_provider.so 00:05:02.244 CC lib/json/json_write.o 00:05:02.244 LIB libspdk_rdma_utils.a 00:05:02.244 CC lib/env_dpdk/init.o 00:05:02.244 SO libspdk_rdma_utils.so.1.0 00:05:02.244 SYMLINK libspdk_conf.so 00:05:02.244 CC lib/env_dpdk/threads.o 00:05:02.244 SYMLINK libspdk_rdma_utils.so 00:05:02.244 CC lib/env_dpdk/pci_ioat.o 00:05:02.502 CC lib/env_dpdk/pci_virtio.o 00:05:02.502 LIB libspdk_idxd.a 00:05:02.502 CC lib/env_dpdk/pci_vmd.o 00:05:02.502 CC lib/env_dpdk/pci_idxd.o 00:05:02.502 CC lib/env_dpdk/pci_event.o 00:05:02.502 SO libspdk_idxd.so.12.0 00:05:02.502 LIB libspdk_json.a 00:05:02.502 LIB libspdk_vmd.a 00:05:02.502 CC lib/env_dpdk/sigbus_handler.o 00:05:02.502 SO libspdk_json.so.6.0 00:05:02.502 SO libspdk_vmd.so.6.0 00:05:02.502 SYMLINK libspdk_idxd.so 00:05:02.761 CC lib/env_dpdk/pci_dpdk.o 00:05:02.761 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:02.761 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:02.761 SYMLINK libspdk_json.so 00:05:02.761 SYMLINK libspdk_vmd.so 00:05:03.033 CC lib/jsonrpc/jsonrpc_server.o 00:05:03.033 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:03.033 CC lib/jsonrpc/jsonrpc_client.o 00:05:03.033 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:03.295 LIB libspdk_jsonrpc.a 00:05:03.295 SO libspdk_jsonrpc.so.6.0 00:05:03.295 LIB libspdk_env_dpdk.a 00:05:03.295 SYMLINK libspdk_jsonrpc.so 00:05:03.577 SO libspdk_env_dpdk.so.15.0 00:05:03.577 
SYMLINK libspdk_env_dpdk.so 00:05:03.838 CC lib/rpc/rpc.o 00:05:04.098 LIB libspdk_rpc.a 00:05:04.098 SO libspdk_rpc.so.6.0 00:05:04.098 SYMLINK libspdk_rpc.so 00:05:04.713 CC lib/trace/trace.o 00:05:04.713 CC lib/trace/trace_flags.o 00:05:04.713 CC lib/trace/trace_rpc.o 00:05:04.713 CC lib/notify/notify.o 00:05:04.713 CC lib/notify/notify_rpc.o 00:05:04.713 CC lib/keyring/keyring.o 00:05:04.713 CC lib/keyring/keyring_rpc.o 00:05:04.713 LIB libspdk_notify.a 00:05:04.713 SO libspdk_notify.so.6.0 00:05:04.713 LIB libspdk_trace.a 00:05:04.713 LIB libspdk_keyring.a 00:05:04.713 SO libspdk_trace.so.10.0 00:05:04.981 SYMLINK libspdk_notify.so 00:05:04.981 SO libspdk_keyring.so.1.0 00:05:04.981 SYMLINK libspdk_trace.so 00:05:04.981 SYMLINK libspdk_keyring.so 00:05:05.264 CC lib/sock/sock.o 00:05:05.264 CC lib/thread/thread.o 00:05:05.264 CC lib/thread/iobuf.o 00:05:05.264 CC lib/sock/sock_rpc.o 00:05:05.862 LIB libspdk_sock.a 00:05:05.862 SO libspdk_sock.so.10.0 00:05:05.862 SYMLINK libspdk_sock.so 00:05:06.121 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:06.121 CC lib/nvme/nvme_ctrlr.o 00:05:06.121 CC lib/nvme/nvme_fabric.o 00:05:06.121 CC lib/nvme/nvme_ns_cmd.o 00:05:06.121 CC lib/nvme/nvme_ns.o 00:05:06.121 CC lib/nvme/nvme_pcie_common.o 00:05:06.121 CC lib/nvme/nvme_pcie.o 00:05:06.121 CC lib/nvme/nvme.o 00:05:06.121 CC lib/nvme/nvme_qpair.o 00:05:06.686 LIB libspdk_thread.a 00:05:06.686 SO libspdk_thread.so.10.1 00:05:06.945 SYMLINK libspdk_thread.so 00:05:06.945 CC lib/nvme/nvme_quirks.o 00:05:06.945 CC lib/nvme/nvme_transport.o 00:05:06.945 CC lib/nvme/nvme_discovery.o 00:05:06.945 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:06.945 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:07.204 CC lib/nvme/nvme_tcp.o 00:05:07.204 CC lib/accel/accel.o 00:05:07.204 CC lib/blob/blobstore.o 00:05:07.204 CC lib/nvme/nvme_opal.o 00:05:07.462 CC lib/nvme/nvme_io_msg.o 00:05:07.462 CC lib/blob/request.o 00:05:07.462 CC lib/blob/zeroes.o 00:05:07.719 CC lib/blob/blob_bs_dev.o 00:05:07.719 CC lib/accel/accel_rpc.o 00:05:07.719 CC lib/accel/accel_sw.o 00:05:07.719 CC lib/nvme/nvme_poll_group.o 00:05:07.977 CC lib/nvme/nvme_zns.o 00:05:07.977 CC lib/nvme/nvme_stubs.o 00:05:07.977 CC lib/nvme/nvme_auth.o 00:05:07.977 CC lib/nvme/nvme_cuse.o 00:05:07.977 LIB libspdk_accel.a 00:05:07.977 SO libspdk_accel.so.16.0 00:05:08.235 CC lib/init/json_config.o 00:05:08.235 SYMLINK libspdk_accel.so 00:05:08.235 CC lib/init/subsystem.o 00:05:08.235 CC lib/init/subsystem_rpc.o 00:05:08.493 CC lib/init/rpc.o 00:05:08.493 CC lib/nvme/nvme_rdma.o 00:05:08.493 LIB libspdk_init.a 00:05:08.750 SO libspdk_init.so.5.0 00:05:08.750 CC lib/virtio/virtio_vfio_user.o 00:05:08.750 CC lib/virtio/virtio_vhost_user.o 00:05:08.750 CC lib/virtio/virtio.o 00:05:08.750 CC lib/virtio/virtio_pci.o 00:05:08.750 CC lib/bdev/bdev.o 00:05:08.750 SYMLINK libspdk_init.so 00:05:08.750 CC lib/bdev/bdev_rpc.o 00:05:08.750 CC lib/bdev/bdev_zone.o 00:05:08.750 CC lib/bdev/part.o 00:05:09.007 CC lib/event/app.o 00:05:09.007 CC lib/event/reactor.o 00:05:09.007 CC lib/bdev/scsi_nvme.o 00:05:09.007 LIB libspdk_virtio.a 00:05:09.007 SO libspdk_virtio.so.7.0 00:05:09.007 CC lib/event/log_rpc.o 00:05:09.007 CC lib/event/app_rpc.o 00:05:09.007 SYMLINK libspdk_virtio.so 00:05:09.007 CC lib/event/scheduler_static.o 00:05:09.266 LIB libspdk_event.a 00:05:09.524 SO libspdk_event.so.14.0 00:05:09.524 SYMLINK libspdk_event.so 00:05:09.524 LIB libspdk_nvme.a 00:05:09.808 SO libspdk_nvme.so.13.1 00:05:09.808 LIB libspdk_blob.a 00:05:10.066 SO libspdk_blob.so.11.0 00:05:10.066 SYMLINK 
libspdk_blob.so 00:05:10.066 SYMLINK libspdk_nvme.so 00:05:10.646 CC lib/lvol/lvol.o 00:05:10.646 CC lib/blobfs/blobfs.o 00:05:10.646 CC lib/blobfs/tree.o 00:05:11.211 LIB libspdk_bdev.a 00:05:11.211 SO libspdk_bdev.so.16.0 00:05:11.211 LIB libspdk_blobfs.a 00:05:11.211 SYMLINK libspdk_bdev.so 00:05:11.211 SO libspdk_blobfs.so.10.0 00:05:11.211 LIB libspdk_lvol.a 00:05:11.468 SO libspdk_lvol.so.10.0 00:05:11.468 SYMLINK libspdk_blobfs.so 00:05:11.468 SYMLINK libspdk_lvol.so 00:05:11.468 CC lib/ftl/ftl_init.o 00:05:11.468 CC lib/ftl/ftl_core.o 00:05:11.468 CC lib/scsi/dev.o 00:05:11.468 CC lib/ftl/ftl_debug.o 00:05:11.468 CC lib/ftl/ftl_layout.o 00:05:11.468 CC lib/scsi/lun.o 00:05:11.468 CC lib/scsi/port.o 00:05:11.468 CC lib/nbd/nbd.o 00:05:11.468 CC lib/ublk/ublk.o 00:05:11.468 CC lib/nvmf/ctrlr.o 00:05:11.725 CC lib/scsi/scsi.o 00:05:11.725 CC lib/scsi/scsi_bdev.o 00:05:11.725 CC lib/nvmf/ctrlr_discovery.o 00:05:11.725 CC lib/nvmf/ctrlr_bdev.o 00:05:11.725 CC lib/nvmf/subsystem.o 00:05:11.983 CC lib/ftl/ftl_io.o 00:05:11.983 CC lib/nbd/nbd_rpc.o 00:05:11.983 CC lib/nvmf/nvmf.o 00:05:11.983 CC lib/nvmf/nvmf_rpc.o 00:05:11.983 LIB libspdk_nbd.a 00:05:11.983 SO libspdk_nbd.so.7.0 00:05:11.983 CC lib/ublk/ublk_rpc.o 00:05:12.240 CC lib/ftl/ftl_sb.o 00:05:12.240 SYMLINK libspdk_nbd.so 00:05:12.240 CC lib/scsi/scsi_pr.o 00:05:12.240 CC lib/nvmf/transport.o 00:05:12.240 CC lib/nvmf/tcp.o 00:05:12.240 LIB libspdk_ublk.a 00:05:12.240 SO libspdk_ublk.so.3.0 00:05:12.240 CC lib/ftl/ftl_l2p.o 00:05:12.498 SYMLINK libspdk_ublk.so 00:05:12.498 CC lib/scsi/scsi_rpc.o 00:05:12.498 CC lib/nvmf/stubs.o 00:05:12.498 CC lib/nvmf/mdns_server.o 00:05:12.498 CC lib/ftl/ftl_l2p_flat.o 00:05:12.498 CC lib/scsi/task.o 00:05:12.757 CC lib/ftl/ftl_nv_cache.o 00:05:12.757 LIB libspdk_scsi.a 00:05:12.757 CC lib/nvmf/rdma.o 00:05:12.757 SO libspdk_scsi.so.9.0 00:05:12.757 CC lib/nvmf/auth.o 00:05:12.757 CC lib/ftl/ftl_band.o 00:05:13.015 CC lib/ftl/ftl_band_ops.o 00:05:13.015 CC lib/ftl/ftl_writer.o 00:05:13.015 SYMLINK libspdk_scsi.so 00:05:13.015 CC lib/ftl/ftl_rq.o 00:05:13.015 CC lib/iscsi/conn.o 00:05:13.015 CC lib/ftl/ftl_reloc.o 00:05:13.015 CC lib/ftl/ftl_l2p_cache.o 00:05:13.015 CC lib/iscsi/init_grp.o 00:05:13.274 CC lib/ftl/ftl_p2l.o 00:05:13.274 CC lib/iscsi/iscsi.o 00:05:13.274 CC lib/iscsi/md5.o 00:05:13.533 CC lib/ftl/mngt/ftl_mngt.o 00:05:13.533 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:13.533 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:13.533 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:13.533 CC lib/iscsi/param.o 00:05:13.533 CC lib/iscsi/portal_grp.o 00:05:13.533 CC lib/iscsi/tgt_node.o 00:05:13.792 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:13.792 CC lib/vhost/vhost.o 00:05:13.792 CC lib/iscsi/iscsi_subsystem.o 00:05:13.792 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:13.792 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:13.792 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:13.793 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:14.050 CC lib/vhost/vhost_rpc.o 00:05:14.050 CC lib/vhost/vhost_scsi.o 00:05:14.050 CC lib/iscsi/iscsi_rpc.o 00:05:14.050 CC lib/iscsi/task.o 00:05:14.050 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:14.050 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:14.308 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:14.308 CC lib/vhost/vhost_blk.o 00:05:14.308 CC lib/vhost/rte_vhost_user.o 00:05:14.308 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:14.308 CC lib/ftl/utils/ftl_conf.o 00:05:14.308 CC lib/ftl/utils/ftl_md.o 00:05:14.566 LIB libspdk_iscsi.a 00:05:14.566 CC lib/ftl/utils/ftl_mempool.o 00:05:14.566 CC lib/ftl/utils/ftl_bitmap.o 00:05:14.566 CC 
lib/ftl/utils/ftl_property.o 00:05:14.566 SO libspdk_iscsi.so.8.0 00:05:14.566 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:14.826 LIB libspdk_nvmf.a 00:05:14.826 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:14.826 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:14.826 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:14.826 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:14.826 SYMLINK libspdk_iscsi.so 00:05:14.826 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:14.826 SO libspdk_nvmf.so.19.0 00:05:14.826 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:14.826 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:15.084 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:15.084 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:15.084 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:15.084 CC lib/ftl/base/ftl_base_dev.o 00:05:15.084 SYMLINK libspdk_nvmf.so 00:05:15.085 CC lib/ftl/base/ftl_base_bdev.o 00:05:15.085 CC lib/ftl/ftl_trace.o 00:05:15.085 LIB libspdk_vhost.a 00:05:15.344 SO libspdk_vhost.so.8.0 00:05:15.344 LIB libspdk_ftl.a 00:05:15.344 SYMLINK libspdk_vhost.so 00:05:15.602 SO libspdk_ftl.so.9.0 00:05:15.861 SYMLINK libspdk_ftl.so 00:05:16.428 CC module/env_dpdk/env_dpdk_rpc.o 00:05:16.428 CC module/scheduler/gscheduler/gscheduler.o 00:05:16.428 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:16.428 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:16.428 CC module/blob/bdev/blob_bdev.o 00:05:16.428 CC module/sock/posix/posix.o 00:05:16.428 CC module/accel/error/accel_error.o 00:05:16.428 CC module/keyring/file/keyring.o 00:05:16.428 CC module/accel/ioat/accel_ioat.o 00:05:16.428 CC module/keyring/linux/keyring.o 00:05:16.428 LIB libspdk_env_dpdk_rpc.a 00:05:16.428 SO libspdk_env_dpdk_rpc.so.6.0 00:05:16.686 SYMLINK libspdk_env_dpdk_rpc.so 00:05:16.686 CC module/accel/ioat/accel_ioat_rpc.o 00:05:16.686 LIB libspdk_scheduler_gscheduler.a 00:05:16.686 CC module/keyring/file/keyring_rpc.o 00:05:16.686 CC module/keyring/linux/keyring_rpc.o 00:05:16.686 LIB libspdk_scheduler_dpdk_governor.a 00:05:16.686 SO libspdk_scheduler_gscheduler.so.4.0 00:05:16.686 CC module/accel/error/accel_error_rpc.o 00:05:16.686 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:16.686 LIB libspdk_scheduler_dynamic.a 00:05:16.686 SO libspdk_scheduler_dynamic.so.4.0 00:05:16.686 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:16.686 SYMLINK libspdk_scheduler_gscheduler.so 00:05:16.686 LIB libspdk_blob_bdev.a 00:05:16.686 LIB libspdk_accel_ioat.a 00:05:16.686 SYMLINK libspdk_scheduler_dynamic.so 00:05:16.686 LIB libspdk_keyring_linux.a 00:05:16.686 LIB libspdk_keyring_file.a 00:05:16.686 SO libspdk_blob_bdev.so.11.0 00:05:16.686 SO libspdk_accel_ioat.so.6.0 00:05:16.686 SO libspdk_keyring_linux.so.1.0 00:05:16.686 SO libspdk_keyring_file.so.1.0 00:05:16.686 LIB libspdk_accel_error.a 00:05:16.686 SYMLINK libspdk_blob_bdev.so 00:05:16.686 SO libspdk_accel_error.so.2.0 00:05:16.686 SYMLINK libspdk_accel_ioat.so 00:05:16.686 SYMLINK libspdk_keyring_linux.so 00:05:16.686 SYMLINK libspdk_keyring_file.so 00:05:16.946 CC module/accel/dsa/accel_dsa.o 00:05:16.946 CC module/accel/dsa/accel_dsa_rpc.o 00:05:16.946 SYMLINK libspdk_accel_error.so 00:05:16.946 CC module/accel/iaa/accel_iaa.o 00:05:16.946 CC module/accel/iaa/accel_iaa_rpc.o 00:05:16.946 LIB libspdk_accel_iaa.a 00:05:16.946 LIB libspdk_accel_dsa.a 00:05:16.946 CC module/blobfs/bdev/blobfs_bdev.o 00:05:16.946 CC module/bdev/lvol/vbdev_lvol.o 00:05:16.946 CC module/bdev/delay/vbdev_delay.o 00:05:17.205 SO libspdk_accel_iaa.so.3.0 00:05:17.205 CC module/bdev/gpt/gpt.o 00:05:17.205 CC module/bdev/error/vbdev_error.o 
00:05:17.205 LIB libspdk_sock_posix.a 00:05:17.205 SO libspdk_accel_dsa.so.5.0 00:05:17.205 SO libspdk_sock_posix.so.6.0 00:05:17.205 CC module/bdev/malloc/bdev_malloc.o 00:05:17.205 CC module/bdev/null/bdev_null.o 00:05:17.205 SYMLINK libspdk_accel_dsa.so 00:05:17.205 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:17.205 SYMLINK libspdk_accel_iaa.so 00:05:17.205 CC module/bdev/null/bdev_null_rpc.o 00:05:17.205 SYMLINK libspdk_sock_posix.so 00:05:17.205 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:17.205 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:17.205 CC module/bdev/gpt/vbdev_gpt.o 00:05:17.463 CC module/bdev/error/vbdev_error_rpc.o 00:05:17.463 LIB libspdk_bdev_null.a 00:05:17.463 SO libspdk_bdev_null.so.6.0 00:05:17.463 LIB libspdk_bdev_delay.a 00:05:17.463 LIB libspdk_blobfs_bdev.a 00:05:17.463 SO libspdk_bdev_delay.so.6.0 00:05:17.463 LIB libspdk_bdev_malloc.a 00:05:17.463 SO libspdk_blobfs_bdev.so.6.0 00:05:17.463 CC module/bdev/nvme/bdev_nvme.o 00:05:17.463 SYMLINK libspdk_bdev_null.so 00:05:17.463 SO libspdk_bdev_malloc.so.6.0 00:05:17.463 CC module/bdev/passthru/vbdev_passthru.o 00:05:17.463 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:17.463 SYMLINK libspdk_bdev_delay.so 00:05:17.463 LIB libspdk_bdev_error.a 00:05:17.463 LIB libspdk_bdev_gpt.a 00:05:17.463 CC module/bdev/raid/bdev_raid.o 00:05:17.463 SYMLINK libspdk_blobfs_bdev.so 00:05:17.722 SYMLINK libspdk_bdev_malloc.so 00:05:17.722 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:17.722 SO libspdk_bdev_error.so.6.0 00:05:17.722 SO libspdk_bdev_gpt.so.6.0 00:05:17.722 SYMLINK libspdk_bdev_gpt.so 00:05:17.722 SYMLINK libspdk_bdev_error.so 00:05:17.722 CC module/bdev/raid/bdev_raid_rpc.o 00:05:17.722 CC module/bdev/raid/bdev_raid_sb.o 00:05:17.722 CC module/bdev/split/vbdev_split.o 00:05:17.722 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:17.722 CC module/bdev/aio/bdev_aio.o 00:05:17.722 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:17.981 LIB libspdk_bdev_lvol.a 00:05:17.981 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:17.981 CC module/bdev/split/vbdev_split_rpc.o 00:05:17.981 SO libspdk_bdev_lvol.so.6.0 00:05:17.981 CC module/bdev/nvme/nvme_rpc.o 00:05:17.981 LIB libspdk_bdev_passthru.a 00:05:17.981 SYMLINK libspdk_bdev_lvol.so 00:05:17.981 CC module/bdev/nvme/bdev_mdns_client.o 00:05:17.981 SO libspdk_bdev_passthru.so.6.0 00:05:17.981 CC module/bdev/nvme/vbdev_opal.o 00:05:17.981 LIB libspdk_bdev_zone_block.a 00:05:17.981 LIB libspdk_bdev_split.a 00:05:17.981 SYMLINK libspdk_bdev_passthru.so 00:05:17.981 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:17.981 SO libspdk_bdev_zone_block.so.6.0 00:05:17.981 CC module/bdev/aio/bdev_aio_rpc.o 00:05:17.981 SO libspdk_bdev_split.so.6.0 00:05:18.239 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:18.239 CC module/bdev/raid/raid0.o 00:05:18.239 SYMLINK libspdk_bdev_zone_block.so 00:05:18.239 SYMLINK libspdk_bdev_split.so 00:05:18.239 CC module/bdev/raid/raid1.o 00:05:18.239 CC module/bdev/raid/concat.o 00:05:18.239 LIB libspdk_bdev_aio.a 00:05:18.239 SO libspdk_bdev_aio.so.6.0 00:05:18.239 CC module/bdev/ftl/bdev_ftl.o 00:05:18.239 SYMLINK libspdk_bdev_aio.so 00:05:18.239 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:18.498 CC module/bdev/iscsi/bdev_iscsi.o 00:05:18.498 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:18.498 LIB libspdk_bdev_raid.a 00:05:18.498 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:18.498 SO libspdk_bdev_raid.so.6.0 00:05:18.498 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:18.498 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:18.498 CC 
module/bdev/rbd/bdev_rbd.o 00:05:18.498 CC module/bdev/rbd/bdev_rbd_rpc.o 00:05:18.757 LIB libspdk_bdev_ftl.a 00:05:18.757 SYMLINK libspdk_bdev_raid.so 00:05:18.757 SO libspdk_bdev_ftl.so.6.0 00:05:18.757 SYMLINK libspdk_bdev_ftl.so 00:05:18.757 LIB libspdk_bdev_iscsi.a 00:05:19.015 SO libspdk_bdev_iscsi.so.6.0 00:05:19.015 SYMLINK libspdk_bdev_iscsi.so 00:05:19.015 LIB libspdk_bdev_rbd.a 00:05:19.015 LIB libspdk_bdev_virtio.a 00:05:19.015 SO libspdk_bdev_rbd.so.7.0 00:05:19.015 SO libspdk_bdev_virtio.so.6.0 00:05:19.274 SYMLINK libspdk_bdev_rbd.so 00:05:19.274 SYMLINK libspdk_bdev_virtio.so 00:05:19.533 LIB libspdk_bdev_nvme.a 00:05:19.533 SO libspdk_bdev_nvme.so.7.0 00:05:19.792 SYMLINK libspdk_bdev_nvme.so 00:05:20.358 CC module/event/subsystems/keyring/keyring.o 00:05:20.358 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:20.358 CC module/event/subsystems/vmd/vmd.o 00:05:20.358 CC module/event/subsystems/scheduler/scheduler.o 00:05:20.358 CC module/event/subsystems/sock/sock.o 00:05:20.358 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:20.358 CC module/event/subsystems/iobuf/iobuf.o 00:05:20.358 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:20.358 LIB libspdk_event_keyring.a 00:05:20.358 LIB libspdk_event_scheduler.a 00:05:20.358 LIB libspdk_event_sock.a 00:05:20.358 LIB libspdk_event_vhost_blk.a 00:05:20.617 SO libspdk_event_keyring.so.1.0 00:05:20.617 LIB libspdk_event_vmd.a 00:05:20.617 LIB libspdk_event_iobuf.a 00:05:20.617 SO libspdk_event_scheduler.so.4.0 00:05:20.617 SO libspdk_event_sock.so.5.0 00:05:20.617 SO libspdk_event_vhost_blk.so.3.0 00:05:20.617 SO libspdk_event_vmd.so.6.0 00:05:20.617 SO libspdk_event_iobuf.so.3.0 00:05:20.617 SYMLINK libspdk_event_keyring.so 00:05:20.617 SYMLINK libspdk_event_sock.so 00:05:20.617 SYMLINK libspdk_event_scheduler.so 00:05:20.617 SYMLINK libspdk_event_vhost_blk.so 00:05:20.617 SYMLINK libspdk_event_vmd.so 00:05:20.617 SYMLINK libspdk_event_iobuf.so 00:05:20.875 CC module/event/subsystems/accel/accel.o 00:05:21.142 LIB libspdk_event_accel.a 00:05:21.142 SO libspdk_event_accel.so.6.0 00:05:21.142 SYMLINK libspdk_event_accel.so 00:05:21.707 CC module/event/subsystems/bdev/bdev.o 00:05:21.707 LIB libspdk_event_bdev.a 00:05:21.965 SO libspdk_event_bdev.so.6.0 00:05:21.965 SYMLINK libspdk_event_bdev.so 00:05:22.223 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:22.223 CC module/event/subsystems/scsi/scsi.o 00:05:22.223 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:22.223 CC module/event/subsystems/ublk/ublk.o 00:05:22.223 CC module/event/subsystems/nbd/nbd.o 00:05:22.481 LIB libspdk_event_nbd.a 00:05:22.481 LIB libspdk_event_ublk.a 00:05:22.481 LIB libspdk_event_scsi.a 00:05:22.481 SO libspdk_event_ublk.so.3.0 00:05:22.481 SO libspdk_event_nbd.so.6.0 00:05:22.481 SO libspdk_event_scsi.so.6.0 00:05:22.481 LIB libspdk_event_nvmf.a 00:05:22.481 SYMLINK libspdk_event_nbd.so 00:05:22.481 SYMLINK libspdk_event_ublk.so 00:05:22.481 SO libspdk_event_nvmf.so.6.0 00:05:22.481 SYMLINK libspdk_event_scsi.so 00:05:22.740 SYMLINK libspdk_event_nvmf.so 00:05:22.998 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:22.998 CC module/event/subsystems/iscsi/iscsi.o 00:05:22.998 LIB libspdk_event_vhost_scsi.a 00:05:22.998 LIB libspdk_event_iscsi.a 00:05:23.256 SO libspdk_event_vhost_scsi.so.3.0 00:05:23.256 SO libspdk_event_iscsi.so.6.0 00:05:23.256 SYMLINK libspdk_event_vhost_scsi.so 00:05:23.256 SYMLINK libspdk_event_iscsi.so 00:05:23.514 SO libspdk.so.6.0 00:05:23.514 SYMLINK libspdk.so 00:05:23.777 CXX app/trace/trace.o 00:05:23.777 CC 
app/trace_record/trace_record.o 00:05:23.777 CC app/spdk_lspci/spdk_lspci.o 00:05:23.777 CC app/spdk_nvme_perf/perf.o 00:05:23.777 CC app/iscsi_tgt/iscsi_tgt.o 00:05:23.777 CC app/nvmf_tgt/nvmf_main.o 00:05:23.777 CC app/spdk_tgt/spdk_tgt.o 00:05:23.777 CC examples/util/zipf/zipf.o 00:05:23.777 CC test/thread/poller_perf/poller_perf.o 00:05:23.777 CC test/dma/test_dma/test_dma.o 00:05:24.047 LINK spdk_lspci 00:05:24.047 LINK spdk_trace_record 00:05:24.047 LINK iscsi_tgt 00:05:24.047 LINK nvmf_tgt 00:05:24.047 LINK zipf 00:05:24.047 LINK poller_perf 00:05:24.047 LINK spdk_tgt 00:05:24.047 LINK spdk_trace 00:05:24.305 CC app/spdk_nvme_identify/identify.o 00:05:24.305 CC app/spdk_nvme_discover/discovery_aer.o 00:05:24.306 LINK test_dma 00:05:24.306 CC examples/ioat/perf/perf.o 00:05:24.306 TEST_HEADER include/spdk/accel.h 00:05:24.306 TEST_HEADER include/spdk/accel_module.h 00:05:24.306 TEST_HEADER include/spdk/assert.h 00:05:24.306 TEST_HEADER include/spdk/barrier.h 00:05:24.306 TEST_HEADER include/spdk/base64.h 00:05:24.306 TEST_HEADER include/spdk/bdev.h 00:05:24.306 TEST_HEADER include/spdk/bdev_module.h 00:05:24.306 TEST_HEADER include/spdk/bdev_zone.h 00:05:24.306 TEST_HEADER include/spdk/bit_array.h 00:05:24.306 TEST_HEADER include/spdk/bit_pool.h 00:05:24.306 TEST_HEADER include/spdk/blob_bdev.h 00:05:24.306 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:24.306 TEST_HEADER include/spdk/blobfs.h 00:05:24.306 TEST_HEADER include/spdk/blob.h 00:05:24.306 TEST_HEADER include/spdk/conf.h 00:05:24.306 TEST_HEADER include/spdk/config.h 00:05:24.306 TEST_HEADER include/spdk/cpuset.h 00:05:24.306 TEST_HEADER include/spdk/crc16.h 00:05:24.306 TEST_HEADER include/spdk/crc32.h 00:05:24.306 CC test/app/bdev_svc/bdev_svc.o 00:05:24.306 TEST_HEADER include/spdk/crc64.h 00:05:24.306 TEST_HEADER include/spdk/dif.h 00:05:24.306 TEST_HEADER include/spdk/dma.h 00:05:24.306 TEST_HEADER include/spdk/endian.h 00:05:24.306 TEST_HEADER include/spdk/env_dpdk.h 00:05:24.306 TEST_HEADER include/spdk/env.h 00:05:24.306 TEST_HEADER include/spdk/event.h 00:05:24.306 TEST_HEADER include/spdk/fd_group.h 00:05:24.306 TEST_HEADER include/spdk/fd.h 00:05:24.306 TEST_HEADER include/spdk/file.h 00:05:24.306 TEST_HEADER include/spdk/ftl.h 00:05:24.306 TEST_HEADER include/spdk/gpt_spec.h 00:05:24.306 TEST_HEADER include/spdk/hexlify.h 00:05:24.306 TEST_HEADER include/spdk/histogram_data.h 00:05:24.306 TEST_HEADER include/spdk/idxd.h 00:05:24.306 TEST_HEADER include/spdk/idxd_spec.h 00:05:24.564 TEST_HEADER include/spdk/init.h 00:05:24.564 CC examples/ioat/verify/verify.o 00:05:24.564 TEST_HEADER include/spdk/ioat.h 00:05:24.564 TEST_HEADER include/spdk/ioat_spec.h 00:05:24.564 TEST_HEADER include/spdk/iscsi_spec.h 00:05:24.564 LINK spdk_nvme_discover 00:05:24.564 TEST_HEADER include/spdk/json.h 00:05:24.564 TEST_HEADER include/spdk/jsonrpc.h 00:05:24.564 TEST_HEADER include/spdk/keyring.h 00:05:24.564 TEST_HEADER include/spdk/keyring_module.h 00:05:24.564 TEST_HEADER include/spdk/likely.h 00:05:24.564 TEST_HEADER include/spdk/log.h 00:05:24.564 TEST_HEADER include/spdk/lvol.h 00:05:24.564 TEST_HEADER include/spdk/memory.h 00:05:24.564 TEST_HEADER include/spdk/mmio.h 00:05:24.564 TEST_HEADER include/spdk/nbd.h 00:05:24.564 TEST_HEADER include/spdk/net.h 00:05:24.564 TEST_HEADER include/spdk/notify.h 00:05:24.564 CC test/event/event_perf/event_perf.o 00:05:24.564 TEST_HEADER include/spdk/nvme.h 00:05:24.564 TEST_HEADER include/spdk/nvme_intel.h 00:05:24.564 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:24.564 TEST_HEADER 
include/spdk/nvme_ocssd_spec.h 00:05:24.564 TEST_HEADER include/spdk/nvme_spec.h 00:05:24.564 TEST_HEADER include/spdk/nvme_zns.h 00:05:24.564 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:24.564 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:24.564 TEST_HEADER include/spdk/nvmf.h 00:05:24.564 TEST_HEADER include/spdk/nvmf_spec.h 00:05:24.564 LINK spdk_nvme_perf 00:05:24.564 TEST_HEADER include/spdk/nvmf_transport.h 00:05:24.564 TEST_HEADER include/spdk/opal.h 00:05:24.564 TEST_HEADER include/spdk/opal_spec.h 00:05:24.564 TEST_HEADER include/spdk/pci_ids.h 00:05:24.564 TEST_HEADER include/spdk/pipe.h 00:05:24.564 TEST_HEADER include/spdk/queue.h 00:05:24.564 TEST_HEADER include/spdk/reduce.h 00:05:24.564 TEST_HEADER include/spdk/rpc.h 00:05:24.564 TEST_HEADER include/spdk/scheduler.h 00:05:24.564 TEST_HEADER include/spdk/scsi.h 00:05:24.564 CC test/env/mem_callbacks/mem_callbacks.o 00:05:24.564 TEST_HEADER include/spdk/scsi_spec.h 00:05:24.564 TEST_HEADER include/spdk/sock.h 00:05:24.564 TEST_HEADER include/spdk/stdinc.h 00:05:24.564 TEST_HEADER include/spdk/string.h 00:05:24.564 TEST_HEADER include/spdk/thread.h 00:05:24.564 TEST_HEADER include/spdk/trace.h 00:05:24.564 TEST_HEADER include/spdk/trace_parser.h 00:05:24.564 TEST_HEADER include/spdk/tree.h 00:05:24.564 TEST_HEADER include/spdk/ublk.h 00:05:24.564 TEST_HEADER include/spdk/util.h 00:05:24.564 TEST_HEADER include/spdk/uuid.h 00:05:24.564 TEST_HEADER include/spdk/version.h 00:05:24.564 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:24.564 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:24.564 TEST_HEADER include/spdk/vhost.h 00:05:24.564 TEST_HEADER include/spdk/vmd.h 00:05:24.564 TEST_HEADER include/spdk/xor.h 00:05:24.564 LINK ioat_perf 00:05:24.564 TEST_HEADER include/spdk/zipf.h 00:05:24.564 CXX test/cpp_headers/accel.o 00:05:24.564 LINK bdev_svc 00:05:24.564 LINK event_perf 00:05:24.564 CC test/env/vtophys/vtophys.o 00:05:24.564 LINK verify 00:05:24.822 CXX test/cpp_headers/accel_module.o 00:05:24.822 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:24.822 LINK vtophys 00:05:24.822 CC test/env/memory/memory_ut.o 00:05:24.822 CC test/env/pci/pci_ut.o 00:05:24.822 CXX test/cpp_headers/assert.o 00:05:24.822 CC test/event/reactor/reactor.o 00:05:24.822 LINK env_dpdk_post_init 00:05:25.080 LINK spdk_nvme_identify 00:05:25.080 CXX test/cpp_headers/barrier.o 00:05:25.080 LINK mem_callbacks 00:05:25.080 LINK reactor 00:05:25.080 CC examples/vmd/lsvmd/lsvmd.o 00:05:25.080 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:25.080 CC examples/idxd/perf/perf.o 00:05:25.080 LINK pci_ut 00:05:25.080 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:25.080 LINK lsvmd 00:05:25.338 CXX test/cpp_headers/base64.o 00:05:25.338 CC app/spdk_top/spdk_top.o 00:05:25.338 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:25.338 CC test/event/reactor_perf/reactor_perf.o 00:05:25.338 CXX test/cpp_headers/bdev.o 00:05:25.338 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:25.338 LINK reactor_perf 00:05:25.338 CXX test/cpp_headers/bdev_module.o 00:05:25.338 LINK idxd_perf 00:05:25.595 LINK nvme_fuzz 00:05:25.595 CC examples/vmd/led/led.o 00:05:25.595 CXX test/cpp_headers/bdev_zone.o 00:05:25.595 LINK led 00:05:25.595 CC test/event/app_repeat/app_repeat.o 00:05:25.853 CC app/vhost/vhost.o 00:05:25.853 CC test/rpc_client/rpc_client_test.o 00:05:25.853 LINK vhost_fuzz 00:05:25.853 LINK memory_ut 00:05:25.853 CXX test/cpp_headers/bit_array.o 00:05:25.853 LINK app_repeat 00:05:25.853 CC test/accel/dif/dif.o 00:05:25.853 LINK rpc_client_test 00:05:25.853 LINK 
vhost 00:05:25.853 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:25.853 CXX test/cpp_headers/bit_pool.o 00:05:26.110 CXX test/cpp_headers/blob_bdev.o 00:05:26.110 LINK spdk_top 00:05:26.110 CC app/spdk_dd/spdk_dd.o 00:05:26.110 CXX test/cpp_headers/blobfs_bdev.o 00:05:26.110 LINK interrupt_tgt 00:05:26.110 CXX test/cpp_headers/blobfs.o 00:05:26.110 CC test/event/scheduler/scheduler.o 00:05:26.110 CC test/app/histogram_perf/histogram_perf.o 00:05:26.367 CXX test/cpp_headers/blob.o 00:05:26.367 CC test/app/jsoncat/jsoncat.o 00:05:26.367 CXX test/cpp_headers/conf.o 00:05:26.367 LINK histogram_perf 00:05:26.367 LINK dif 00:05:26.367 LINK spdk_dd 00:05:26.367 LINK scheduler 00:05:26.367 CC test/blobfs/mkfs/mkfs.o 00:05:26.367 LINK jsoncat 00:05:26.625 CXX test/cpp_headers/config.o 00:05:26.625 CXX test/cpp_headers/cpuset.o 00:05:26.625 CC examples/thread/thread/thread_ex.o 00:05:26.625 CC test/app/stub/stub.o 00:05:26.625 CXX test/cpp_headers/crc16.o 00:05:26.625 LINK mkfs 00:05:26.625 LINK iscsi_fuzz 00:05:26.625 CXX test/cpp_headers/crc32.o 00:05:26.883 LINK stub 00:05:26.883 CC app/fio/nvme/fio_plugin.o 00:05:26.883 LINK thread 00:05:26.883 CC test/nvme/aer/aer.o 00:05:26.883 CXX test/cpp_headers/crc64.o 00:05:26.883 CC test/lvol/esnap/esnap.o 00:05:26.883 CXX test/cpp_headers/dif.o 00:05:26.883 CC test/bdev/bdevio/bdevio.o 00:05:27.140 CXX test/cpp_headers/dma.o 00:05:27.140 CXX test/cpp_headers/endian.o 00:05:27.140 CC examples/sock/hello_world/hello_sock.o 00:05:27.140 CXX test/cpp_headers/env_dpdk.o 00:05:27.140 LINK aer 00:05:27.140 CXX test/cpp_headers/env.o 00:05:27.397 CC test/nvme/reset/reset.o 00:05:27.397 CC app/fio/bdev/fio_plugin.o 00:05:27.397 LINK spdk_nvme 00:05:27.397 CC test/nvme/sgl/sgl.o 00:05:27.397 LINK hello_sock 00:05:27.397 CXX test/cpp_headers/event.o 00:05:27.397 CC test/nvme/e2edp/nvme_dp.o 00:05:27.397 CXX test/cpp_headers/fd_group.o 00:05:27.397 LINK bdevio 00:05:27.397 CC test/nvme/overhead/overhead.o 00:05:27.655 LINK reset 00:05:27.655 CXX test/cpp_headers/fd.o 00:05:27.655 LINK sgl 00:05:27.655 CC test/nvme/err_injection/err_injection.o 00:05:27.655 LINK nvme_dp 00:05:27.655 CXX test/cpp_headers/file.o 00:05:27.655 LINK overhead 00:05:27.655 CC examples/accel/perf/accel_perf.o 00:05:27.912 CC test/nvme/startup/startup.o 00:05:27.913 LINK spdk_bdev 00:05:27.913 LINK err_injection 00:05:27.913 CC test/nvme/reserve/reserve.o 00:05:27.913 CC test/nvme/simple_copy/simple_copy.o 00:05:27.913 CC test/nvme/connect_stress/connect_stress.o 00:05:27.913 LINK startup 00:05:27.913 CXX test/cpp_headers/ftl.o 00:05:28.171 CC test/nvme/boot_partition/boot_partition.o 00:05:28.171 LINK reserve 00:05:28.171 CC test/nvme/compliance/nvme_compliance.o 00:05:28.171 LINK simple_copy 00:05:28.171 LINK connect_stress 00:05:28.171 CC test/nvme/fused_ordering/fused_ordering.o 00:05:28.171 LINK accel_perf 00:05:28.171 CXX test/cpp_headers/gpt_spec.o 00:05:28.171 LINK boot_partition 00:05:28.430 CXX test/cpp_headers/hexlify.o 00:05:28.430 LINK fused_ordering 00:05:28.430 CXX test/cpp_headers/histogram_data.o 00:05:28.430 LINK nvme_compliance 00:05:28.430 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:28.430 CC test/nvme/fdp/fdp.o 00:05:28.430 CC test/nvme/cuse/cuse.o 00:05:28.430 CXX test/cpp_headers/idxd.o 00:05:28.430 CXX test/cpp_headers/idxd_spec.o 00:05:28.689 CXX test/cpp_headers/init.o 00:05:28.689 CXX test/cpp_headers/ioat.o 00:05:28.689 CXX test/cpp_headers/ioat_spec.o 00:05:28.689 LINK doorbell_aers 00:05:28.689 CC examples/blob/hello_world/hello_blob.o 00:05:28.689 CXX 
test/cpp_headers/iscsi_spec.o 00:05:28.689 CXX test/cpp_headers/json.o 00:05:28.689 LINK fdp 00:05:28.689 CXX test/cpp_headers/jsonrpc.o 00:05:28.689 CXX test/cpp_headers/keyring.o 00:05:28.689 CC examples/blob/cli/blobcli.o 00:05:28.689 CXX test/cpp_headers/keyring_module.o 00:05:28.947 LINK hello_blob 00:05:28.947 CXX test/cpp_headers/likely.o 00:05:28.947 CXX test/cpp_headers/log.o 00:05:28.947 CXX test/cpp_headers/lvol.o 00:05:28.947 CXX test/cpp_headers/memory.o 00:05:29.206 CC examples/nvme/hello_world/hello_world.o 00:05:29.206 CC examples/nvme/reconnect/reconnect.o 00:05:29.206 CXX test/cpp_headers/mmio.o 00:05:29.206 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:29.206 CXX test/cpp_headers/nbd.o 00:05:29.206 CXX test/cpp_headers/net.o 00:05:29.206 LINK blobcli 00:05:29.206 CC examples/bdev/hello_world/hello_bdev.o 00:05:29.206 CC examples/bdev/bdevperf/bdevperf.o 00:05:29.464 LINK hello_world 00:05:29.464 CXX test/cpp_headers/notify.o 00:05:29.464 CC examples/nvme/arbitration/arbitration.o 00:05:29.464 LINK hello_bdev 00:05:29.722 CXX test/cpp_headers/nvme.o 00:05:29.722 LINK reconnect 00:05:29.722 CC examples/nvme/hotplug/hotplug.o 00:05:29.722 LINK cuse 00:05:29.722 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:29.722 LINK nvme_manage 00:05:29.722 LINK arbitration 00:05:29.722 CXX test/cpp_headers/nvme_intel.o 00:05:29.979 CXX test/cpp_headers/nvme_ocssd.o 00:05:29.979 CC examples/nvme/abort/abort.o 00:05:29.979 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:29.979 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:29.979 LINK cmb_copy 00:05:29.979 CXX test/cpp_headers/nvme_spec.o 00:05:29.979 LINK hotplug 00:05:29.979 CXX test/cpp_headers/nvme_zns.o 00:05:29.979 LINK bdevperf 00:05:30.238 CXX test/cpp_headers/nvmf_cmd.o 00:05:30.238 LINK pmr_persistence 00:05:30.238 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:30.238 CXX test/cpp_headers/nvmf.o 00:05:30.238 CXX test/cpp_headers/nvmf_spec.o 00:05:30.238 CXX test/cpp_headers/nvmf_transport.o 00:05:30.238 CXX test/cpp_headers/opal.o 00:05:30.238 LINK abort 00:05:30.238 CXX test/cpp_headers/opal_spec.o 00:05:30.238 CXX test/cpp_headers/pci_ids.o 00:05:30.238 CXX test/cpp_headers/pipe.o 00:05:30.238 CXX test/cpp_headers/queue.o 00:05:30.496 CXX test/cpp_headers/reduce.o 00:05:30.496 CXX test/cpp_headers/rpc.o 00:05:30.496 CXX test/cpp_headers/scheduler.o 00:05:30.496 CXX test/cpp_headers/scsi.o 00:05:30.496 CXX test/cpp_headers/scsi_spec.o 00:05:30.496 CXX test/cpp_headers/sock.o 00:05:30.496 CXX test/cpp_headers/stdinc.o 00:05:30.496 CXX test/cpp_headers/string.o 00:05:30.496 CXX test/cpp_headers/thread.o 00:05:30.496 CXX test/cpp_headers/trace.o 00:05:30.496 CXX test/cpp_headers/trace_parser.o 00:05:30.496 CXX test/cpp_headers/tree.o 00:05:30.754 CXX test/cpp_headers/ublk.o 00:05:30.754 CXX test/cpp_headers/uuid.o 00:05:30.754 CXX test/cpp_headers/util.o 00:05:30.754 CXX test/cpp_headers/version.o 00:05:30.754 CXX test/cpp_headers/vfio_user_pci.o 00:05:30.754 CXX test/cpp_headers/vfio_user_spec.o 00:05:30.754 CXX test/cpp_headers/vhost.o 00:05:30.754 CXX test/cpp_headers/vmd.o 00:05:30.754 CXX test/cpp_headers/xor.o 00:05:30.754 CXX test/cpp_headers/zipf.o 00:05:31.024 CC examples/nvmf/nvmf/nvmf.o 00:05:31.286 LINK nvmf 00:05:31.901 LINK esnap 00:05:32.466 00:05:32.466 real 1m1.952s 00:05:32.466 user 5m34.278s 00:05:32.466 sys 1m44.217s 00:05:32.466 16:56:24 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:32.466 16:56:24 make -- common/autotest_common.sh@10 -- $ set +x 00:05:32.466 
************************************ 00:05:32.466 END TEST make 00:05:32.466 ************************************ 00:05:32.466 16:56:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:32.466 16:56:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:32.466 16:56:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:32.466 16:56:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:32.466 16:56:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:32.466 16:56:24 -- pm/common@44 -- $ pid=5144 00:05:32.466 16:56:24 -- pm/common@50 -- $ kill -TERM 5144 00:05:32.466 16:56:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:32.466 16:56:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:32.466 16:56:24 -- pm/common@44 -- $ pid=5146 00:05:32.466 16:56:24 -- pm/common@50 -- $ kill -TERM 5146 00:05:32.466 16:56:24 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:32.466 16:56:24 -- nvmf/common.sh@7 -- # uname -s 00:05:32.466 16:56:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:32.466 16:56:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.466 16:56:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.466 16:56:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.466 16:56:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.466 16:56:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.466 16:56:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.466 16:56:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.466 16:56:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.466 16:56:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.466 16:56:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4f3ec45a-584a-4a72-a1b0-e42cc578c863 00:05:32.466 16:56:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=4f3ec45a-584a-4a72-a1b0-e42cc578c863 00:05:32.466 16:56:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.466 16:56:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.466 16:56:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:32.466 16:56:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:32.466 16:56:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:32.466 16:56:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.466 16:56:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.466 16:56:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.466 16:56:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.466 16:56:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.466 16:56:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.466 16:56:24 -- paths/export.sh@5 -- # export PATH 00:05:32.466 16:56:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.466 16:56:24 -- nvmf/common.sh@47 -- # : 0 00:05:32.466 16:56:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:32.466 16:56:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:32.466 16:56:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:32.466 16:56:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.466 16:56:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.466 16:56:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:32.466 16:56:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:32.466 16:56:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:32.466 16:56:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:32.466 16:56:24 -- spdk/autotest.sh@32 -- # uname -s 00:05:32.724 16:56:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:32.724 16:56:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:32.724 16:56:24 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:32.724 16:56:24 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:32.724 16:56:24 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:32.724 16:56:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:32.724 16:56:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:32.724 16:56:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:32.724 16:56:24 -- spdk/autotest.sh@48 -- # udevadm_pid=52712 00:05:32.724 16:56:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:32.724 16:56:24 -- pm/common@17 -- # local monitor 00:05:32.724 16:56:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:32.724 16:56:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:32.724 16:56:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:32.724 16:56:24 -- pm/common@25 -- # sleep 1 00:05:32.724 16:56:24 -- pm/common@21 -- # date +%s 00:05:32.724 16:56:24 -- pm/common@21 -- # date +%s 00:05:32.724 16:56:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721926584 00:05:32.724 16:56:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721926584 00:05:32.724 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721926584_collect-vmstat.pm.log 00:05:32.724 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721926584_collect-cpu-load.pm.log 00:05:33.658 16:56:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:33.658 16:56:25 -- spdk/autotest.sh@57 -- # 
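One detail worth calling out in the autotest prologue above: bash xtrace prints commands but not their redirections, so the echo of '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' at autotest.sh@39 looks like a bare echo. Its stdout is presumably pointed at the kernel core_pattern so that any crashing test process gets piped to the collector; the redirection target below is inferred from that convention, since the trace elides it:

  # autotest.sh@33 saved the distro default the same way before overriding it.
  old_core_pattern=$(</proc/sys/kernel/core_pattern)
  mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
  # Pipe future core dumps (PID, signal, time) to the collector script.
  echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' \
      > /proc/sys/kernel/core_pattern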
timing_enter autotest 00:05:33.658 16:56:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:33.658 16:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:33.658 16:56:25 -- spdk/autotest.sh@59 -- # create_test_list 00:05:33.658 16:56:25 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:33.658 16:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:33.658 16:56:26 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:33.658 16:56:26 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:33.658 16:56:26 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:33.658 16:56:26 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:33.658 16:56:26 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:33.658 16:56:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:33.659 16:56:26 -- common/autotest_common.sh@1455 -- # uname 00:05:33.659 16:56:26 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:33.659 16:56:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:33.659 16:56:26 -- common/autotest_common.sh@1475 -- # uname 00:05:33.659 16:56:26 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:33.659 16:56:26 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:33.659 16:56:26 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:05:33.659 16:56:26 -- spdk/autotest.sh@72 -- # hash lcov 00:05:33.659 16:56:26 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:33.659 16:56:26 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:05:33.659 --rc lcov_branch_coverage=1 00:05:33.659 --rc lcov_function_coverage=1 00:05:33.659 --rc genhtml_branch_coverage=1 00:05:33.659 --rc genhtml_function_coverage=1 00:05:33.659 --rc genhtml_legend=1 00:05:33.659 --rc geninfo_all_blocks=1 00:05:33.659 ' 00:05:33.659 16:56:26 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:05:33.659 --rc lcov_branch_coverage=1 00:05:33.659 --rc lcov_function_coverage=1 00:05:33.659 --rc genhtml_branch_coverage=1 00:05:33.659 --rc genhtml_function_coverage=1 00:05:33.659 --rc genhtml_legend=1 00:05:33.659 --rc geninfo_all_blocks=1 00:05:33.659 ' 00:05:33.659 16:56:26 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:05:33.659 --rc lcov_branch_coverage=1 00:05:33.659 --rc lcov_function_coverage=1 00:05:33.659 --rc genhtml_branch_coverage=1 00:05:33.659 --rc genhtml_function_coverage=1 00:05:33.659 --rc genhtml_legend=1 00:05:33.659 --rc geninfo_all_blocks=1 00:05:33.659 --no-external' 00:05:33.659 16:56:26 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:05:33.659 --rc lcov_branch_coverage=1 00:05:33.659 --rc lcov_function_coverage=1 00:05:33.659 --rc genhtml_branch_coverage=1 00:05:33.659 --rc genhtml_function_coverage=1 00:05:33.659 --rc genhtml_legend=1 00:05:33.659 --rc geninfo_all_blocks=1 00:05:33.659 --no-external' 00:05:33.659 16:56:26 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:33.917 lcov: LCOV version 1.14 00:05:33.918 16:56:26 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:48.806 
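The lcov invocation that closes this stretch is the coverage baseline: -c -i captures every instrumented object with zero hit counts before any test has run. It is also why the geninfo 'no functions found' warnings that follow are expected rather than alarming, since the cpp_headers objects compile each public header in isolation and most contain no executable functions. For context, a sketch of the conventional three-step lcov flow this sets up; the post-test capture and merge happen later in autotest, outside this excerpt, and the cov_test.info/cov_total.info names are assumed:

  # 1. Zero-coverage baseline over the build tree (the command traced above).
  lcov $LCOV_OPTS -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"
  # 2. After the tests run, capture the real counters (assumed step).
  lcov $LCOV_OPTS -q -c -t Tests -d "$src" -o "$out/cov_test.info"
  # 3. Merge both so files never touched by tests still report 0% instead of vanishing.
  lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"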
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:48.806 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:01.010 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:01.010 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:01.010 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:01.010 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:01.010 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:01.010 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:01.010 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:01.010 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:01.010 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:01.010 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:01.010 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:01.010 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:01.010 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no 
functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:01.011 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:01.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:01.012 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:01.012 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:01.012 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:01.013 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:01.013 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:01.013 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:01.013 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:04.352 16:56:56 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:06:04.352 16:56:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.352 16:56:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.352 16:56:56 -- spdk/autotest.sh@91 -- # rm -f 00:06:04.352 16:56:56 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:05.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:05.290 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:05.290 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:05.290 16:56:57 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:06:05.290 16:56:57 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:05.290 16:56:57 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:05.290 16:56:57 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:05.290 16:56:57 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:05.290 16:56:57 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:05.290 16:56:57 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:05.290 16:56:57 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:05.290 16:56:57 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:05.290 16:56:57 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:05.290 16:56:57 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:05.290 16:56:57 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:05.290 16:56:57 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:05.290 16:56:57 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:05.290 16:56:57 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:05.290 16:56:57 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 
00:06:05.290 16:56:57 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:06:05.290 16:56:57 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:05.290 16:56:57 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:05.290 16:56:57 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:05.290 16:56:57 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:06:05.290 16:56:57 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:06:05.290 16:56:57 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:05.290 16:56:57 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:05.290 16:56:57 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:06:05.290 16:56:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:05.290 16:56:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:05.290 16:56:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:06:05.290 16:56:57 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:06:05.290 16:56:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:05.290 No valid GPT data, bailing 00:06:05.290 16:56:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:05.290 16:56:57 -- scripts/common.sh@391 -- # pt= 00:06:05.290 16:56:57 -- scripts/common.sh@392 -- # return 1 00:06:05.290 16:56:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:05.290 1+0 records in 00:06:05.290 1+0 records out 00:06:05.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00622618 s, 168 MB/s 00:06:05.290 16:56:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:05.290 16:56:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:05.290 16:56:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:06:05.290 16:56:57 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:06:05.290 16:56:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:05.290 No valid GPT data, bailing 00:06:05.290 16:56:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:05.290 16:56:57 -- scripts/common.sh@391 -- # pt= 00:06:05.290 16:56:57 -- scripts/common.sh@392 -- # return 1 00:06:05.290 16:56:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:05.290 1+0 records in 00:06:05.290 1+0 records out 00:06:05.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00378452 s, 277 MB/s 00:06:05.290 16:56:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:05.290 16:56:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:05.290 16:56:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:06:05.290 16:56:57 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:06:05.291 16:56:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:05.549 No valid GPT data, bailing 00:06:05.549 16:56:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:05.549 16:56:57 -- scripts/common.sh@391 -- # pt= 00:06:05.549 16:56:57 -- scripts/common.sh@392 -- # return 1 00:06:05.549 16:56:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:05.549 1+0 records in 00:06:05.549 1+0 records out 00:06:05.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00398388 s, 263 MB/s 00:06:05.549 16:56:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:05.549 16:56:57 -- 
spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:05.549 16:56:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:06:05.549 16:56:57 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:06:05.549 16:56:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:05.549 No valid GPT data, bailing 00:06:05.549 16:56:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:05.549 16:56:57 -- scripts/common.sh@391 -- # pt= 00:06:05.549 16:56:57 -- scripts/common.sh@392 -- # return 1 00:06:05.549 16:56:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:05.549 1+0 records in 00:06:05.549 1+0 records out 00:06:05.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0038662 s, 271 MB/s 00:06:05.549 16:56:57 -- spdk/autotest.sh@118 -- # sync 00:06:05.549 16:56:57 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:05.549 16:56:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:05.549 16:56:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:08.083 16:57:00 -- spdk/autotest.sh@124 -- # uname -s 00:06:08.083 16:57:00 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:06:08.083 16:57:00 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:08.083 16:57:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.083 16:57:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.083 16:57:00 -- common/autotest_common.sh@10 -- # set +x 00:06:08.421 ************************************ 00:06:08.421 START TEST setup.sh 00:06:08.421 ************************************ 00:06:08.421 16:57:00 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:08.421 * Looking for test storage... 00:06:08.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:08.421 16:57:00 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:06:08.421 16:57:00 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:08.421 16:57:00 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:08.421 16:57:00 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.421 16:57:00 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.421 16:57:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:08.421 ************************************ 00:06:08.421 START TEST acl 00:06:08.421 ************************************ 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:08.421 * Looking for test storage... 
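Stepping back to the pre-cleanup pass that just completed: autotest probes each /dev/nvme*n* namespace with scripts/spdk-gpt.py, and when the probe bails with 'No valid GPT data' (block_in_use returns 1, meaning the device is free), it zeroes the first MiB so a stale partition table cannot leak into the next test. A condensed sketch of that loop; the zoned_devs map is filled by get_zoned_devs, which found no zoned namespaces on this runner:

  shopt -s extglob   # needed for the !(*p*) pattern, exactly as in the trace
  for dev in /dev/nvme*n!(*p*); do
      # Never wipe zoned namespaces (key layout of zoned_devs assumed here).
      [[ -n ${zoned_devs[${dev##*/}]:-} ]] && continue
      if ! block_in_use "$dev"; then              # spdk-gpt.py found no valid GPT
          dd if=/dev/zero of="$dev" bs=1M count=1 # scrub the first MiB
      fi
  done
  sync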
00:06:08.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:08.421 16:57:00 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:08.421 16:57:00 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:08.421 16:57:00 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:06:08.421 16:57:00 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:06:08.421 16:57:00 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:06:08.421 16:57:00 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:06:08.421 16:57:00 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:06:08.422 16:57:00 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:08.422 16:57:00 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:09.357 16:57:01 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:06:09.357 16:57:01 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:06:09.357 16:57:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:09.357 16:57:01 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:06:09.357 16:57:01 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:06:09.357 16:57:01 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:10.294 16:57:02 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:06:10.294 16:57:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:10.294 16:57:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.294 Hugepages 00:06:10.294 node hugesize free / total 00:06:10.294 16:57:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:10.294 16:57:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:10.294 16:57:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.294 00:06:10.294 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:10.294 16:57:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:10.294 16:57:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:10.294 16:57:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.551 16:57:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:06:10.552 16:57:02 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:06:10.552 16:57:02 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.552 16:57:02 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.552 16:57:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:10.552 ************************************ 00:06:10.552 START TEST denied 00:06:10.552 ************************************ 00:06:10.552 16:57:02 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:06:10.552 16:57:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:06:10.552 16:57:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:06:10.552 16:57:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:06:10.552 16:57:02 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:10.552 16:57:02 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:06:11.491 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:06:11.491 16:57:03 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:06:11.491 16:57:03 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:06:11.491 16:57:03 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:06:11.491 16:57:03 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:06:11.491 16:57:03 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:06:11.491 16:57:03 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:11.750 16:57:03 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:11.750 16:57:03 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:06:11.750 16:57:03 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:11.750 16:57:03 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:12.318 00:06:12.318 real 0m1.776s 00:06:12.318 user 0m0.679s 00:06:12.318 sys 0m1.073s 00:06:12.318 16:57:04 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.318 16:57:04 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:06:12.318 ************************************ 00:06:12.318 END TEST denied 00:06:12.318 ************************************ 00:06:12.577 16:57:04 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:06:12.577 16:57:04 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.577 16:57:04 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.577 16:57:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:12.577 ************************************ 00:06:12.577 START TEST allowed 00:06:12.577 ************************************ 00:06:12.577 16:57:04 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:06:12.577 16:57:04 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:06:12.577 16:57:04 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:06:12.577 16:57:04 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:06:12.577 16:57:04 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:06:12.577 16:57:04 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:13.512 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:13.512 16:57:05 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:06:13.512 16:57:05 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:06:13.512 16:57:05 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:06:13.512 16:57:05 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:06:13.512 16:57:05 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:06:13.512 16:57:05 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:13.512 16:57:05 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:13.512 16:57:05 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:06:13.512 16:57:05 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:13.512 16:57:05 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:14.448 00:06:14.448 real 0m1.751s 00:06:14.448 user 0m0.732s 00:06:14.448 sys 0m1.047s 00:06:14.448 16:57:06 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.448 ************************************ 00:06:14.448 END TEST 
allowed 00:06:14.448 ************************************ 00:06:14.448 16:57:06 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:06:14.448 00:06:14.448 real 0m5.900s 00:06:14.448 user 0m2.389s 00:06:14.448 sys 0m3.560s 00:06:14.448 16:57:06 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.448 16:57:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:14.448 ************************************ 00:06:14.448 END TEST acl 00:06:14.448 ************************************ 00:06:14.448 16:57:06 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:14.448 16:57:06 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.448 16:57:06 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.448 16:57:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:14.448 ************************************ 00:06:14.448 START TEST hugepages 00:06:14.448 ************************************ 00:06:14.448 16:57:06 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:14.448 * Looking for test storage... 00:06:14.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6006484 kB' 'MemAvailable: 7402064 kB' 'Buffers: 2436 kB' 'Cached: 1609900 kB' 'SwapCached: 0 kB' 'Active: 441436 kB' 'Inactive: 1281004 kB' 'Active(anon): 120592 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 111684 kB' 'Mapped: 48796 kB' 'Shmem: 10488 kB' 'KReclaimable: 61788 kB' 'Slab: 135808 kB' 'SReclaimable: 61788 kB' 'SUnreclaim: 74020 kB' 'KernelStack: 6352 kB' 'PageTables: 4072 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 343884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:14.448 16:57:06 
setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:14.448 16:57:06 setup.sh.hugepages -- setup/common.sh@31-32 -- # [the IFS=': ' / read -r var val _ / compare / continue cycle repeats for each remaining /proc/meminfo key: Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp; none matches \H\u\g\e\p\a\g\e\s\i\z\e]
00:06:14.449 16:57:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:14.449 16:57:06 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:06:14.449 16:57:06 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:06:14.449 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
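The xtrace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one line at a time until the requested key (here Hugepagesize) matches, then echoing the value. Below is a minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source; the real helper also slurps the file with mapfile and strips any per-node "Node N" prefix, as the @28-@29 entries later in the log show. The clear_hp sketch after it is likewise inferred: the trace records the echo 0, and the redirect target is assumed from the hugepages-*/nr_hugepages sysfs layout.

    # Sketch reconstructed from the xtrace; not the verbatim setup/common.sh.
    # Prints the value of the first meminfo line whose key matches $1; with a
    # node id in $2 it would read that node's meminfo instead.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            # "Hugepagesize:   2048 kB" splits into var=Hugepagesize, val=2048
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done <"$mem_f"
        return 1
    }

    # clear_hp, per the @37-@45 entries: zero every per-node hugepage count,
    # then record that cleanup ran. nodes_sys[] is filled by get_nodes above.
    clear_hp() {
        local node hp
        for node in "${!nodes_sys[@]}"; do
            for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
                echo 0 >"$hp/nr_hugepages"   # redirect target inferred
            done
        done
        export CLEAR_HUGE=yes
    }

On this runner get_meminfo Hugepagesize prints 2048, which is where the default_hugepages=2048 assignment above comes from; clear_hp then zeroes both hugepage sizes on node0 before the test allocates its own pages.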
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:06:14.450 16:57:06 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:06:14.450 16:57:06 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:14.450 16:57:06 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:14.450 16:57:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:14.450 ************************************
00:06:14.450 START TEST default_setup
00:06:14.450 ************************************
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:06:14.450 16:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:15.385 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:15.385 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:15.648 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
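The get_test_nr_hugepages 2097152 0 trace above shows the page-count arithmetic: the requested size and default_hugepages are compared in the same unit (kB), and the traced result nr_hugepages=1024 is consistent with plain integer division, 2097152 / 2048 = 1024 two-megabyte pages (2 GiB total) assigned to node 0. A hypothetical standalone re-derivation follows; variable names track the trace, but the division itself is inferred, since xtrace only records the result.

    # Hypothetical re-derivation of the arithmetic traced above; not the
    # hugepages.sh source. get_test_nr_hugepages 2097152 0 -> 1024 pages.
    default_hugepages=2048                       # kB, from get_meminfo Hugepagesize
    size=2097152                                 # kB requested by the test (2 GiB)
    node_ids=(0)                                 # remaining args: target nodes

    (( size >= default_hugepages )) || exit 1    # the @55 guard in the trace
    nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024

    declare -a nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages           # nodes_test[0]=1024, as traced
    done
    echo "${nodes_test[0]}"                      # 1024

scripts/setup.sh then runs with that target; the device lines above show it skipping the mounted virtio disk and rebinding the two NVMe controllers to uio_pci_generic.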
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8098844 kB' 'MemAvailable: 9494276 kB' 'Buffers: 2436 kB' 'Cached: 1609892 kB' 'SwapCached: 0 kB' 'Active: 453552 kB' 'Inactive: 1281004 kB' 'Active(anon): 132708 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123864 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135452 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73956 kB' 'KernelStack: 6432 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
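What follows is verify_nr_hugepages sampling its baseline counters with three back-to-back get_meminfo scans: AnonHugePages (just started above, and gated at @96 on transparent hugepages not being pinned to [never]; "always [madvise] never" is the content of /sys/kernel/mm/transparent_hugepage/enabled on this runner), then HugePages_Surp, then HugePages_Rsvd. A simplified sketch of that gathering pattern, reconstructed from the trace and reusing the get_meminfo sketch above, not the verbatim hugepages.sh; the locals sorted_t/sorted_s suggest the real function goes on to compare per-node totals, which this excerpt does not reach.

    # Simplified sketch of the sampling traced below; not the verbatim source.
    verify_nr_hugepages() {
        local surp resv anon=0
        # Only sample AnonHugePages when THP is not pinned to [never].
        if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
            anon=$(get_meminfo AnonHugePages)   # -> 0 in this run
        fi
        surp=$(get_meminfo HugePages_Surp)      # -> 0
        resv=$(get_meminfo HugePages_Rsvd)      # -> 0
        printf 'anon=%s surp=%s resv=%s\n' "$anon" "$surp" "$resv"
    }

Each call re-reads the full /proc/meminfo snapshot, which is why the same printf '%s\n' 'MemTotal: ...' block repeats below with slightly different Active/AnonPages numbers.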
00:06:15.648 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [the compare/continue/read cycle repeats for every snapshot key from MemFree through HardwareCorrupted; none matches \A\n\o\n\H\u\g\e\P\a\g\e\s]
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8098844 kB' 'MemAvailable: 9494276 kB' 'Buffers: 2436 kB' 'Cached: 1609892 kB' 'SwapCached: 0 kB' 'Active: 453284 kB' 'Inactive: 1281004 kB' 'Active(anon): 132440 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123512 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135420 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73924 kB' 'KernelStack: 6384 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:06:15.649 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [the read/compare/continue cycle repeats for every snapshot key from MemTotal through HugePages_Rsvd; none matches \H\u\g\e\P\a\g\e\s\_\S\u\r\p]
00:06:15.650 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:15.650 16:57:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8098844 kB' 'MemAvailable: 9494292 kB' 'Buffers: 2436 kB' 'Cached: 1609892 kB' 'SwapCached: 0 kB' 'Active: 452932 kB' 'Inactive: 1281020 kB' 'Active(anon): 132088 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281020 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123200 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135420 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73924 kB' 'KernelStack: 6400 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:06:15.650 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [the read/compare/continue cycle has so far covered the snapshot keys from MemTotal through SwapFree; none matches \H\u\g\e\P\a\g\e\s\_\R\s\v\d, and the scan continues]
00:06:15.651 
16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 
16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.651 
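
(Note on the trace above: get_meminfo in setup/common.sh snapshots a meminfo file into an array, strips any sysfs "Node N " prefixes, then scans "Field: value" pairs until the requested key matches and echoes its value. Below is a minimal bash sketch of that idiom, reconstructed from the xtrace rather than copied from the SPDK source; names and structure are illustrative.)

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) patterns below

    # get_meminfo FIELD [NODE] -- sketch of the lookup the xtrace shows.
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # A per-node query reads that node's sysfs meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # sysfs lines carry a "Node N " prefix; strip it so keys line up.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # Entries look like "HugePages_Rsvd:        0" or "MemTotal: ... kB".
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }

    get_meminfo HugePages_Rsvd     # system-wide, as at setup/hugepages.sh@100
    get_meminfo HugePages_Surp 0   # NUMA node 0, as at setup/hugepages.sh@117
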
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:06:15.651 nr_hugepages=1024
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:15.651 resv_hugepages=0
00:06:15.651 surplus_hugepages=0
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:15.651 anon_hugepages=0
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:15.651 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8098844 kB' 'MemAvailable: 9494292 kB' 'Buffers: 2436 kB' 'Cached: 1609892 kB' 'SwapCached: 0 kB' 'Active: 453372 kB' 'Inactive: 1281020 kB' 'Active(anon): 132528 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281020 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123672 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135420 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73924 kB' 'KernelStack: 6416 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[... xtrace repeats the field-scan cycle ([[ <field> == HugePages_Total ]] / continue / IFS=': ' / read -r var val _) for MemTotal through Unaccepted; none match ...]
00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
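
(Note on the accounting above: with surp and resv read back, setup/hugepages.sh@107-@110 asserts that the kernel's HugePages_Total (1024 here) equals the requested nr_hugepages plus surplus and reserved pages, and get_nodes then sizes the per-NUMA-node expectation from /sys/devices/system/node/node*. A rough sketch of that check follows, assuming the get_meminfo sketch above is in scope; the variable names mirror the trace, but the code is a reconstruction, not the SPDK source.)

    #!/usr/bin/env bash
    shopt -s extglob
    # source the get_meminfo sketch above, or inline it here

    # Pool consistency, as at setup/hugepages.sh@107-@110 in the trace:
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool mismatch' >&2

    # Node enumeration, as at setup/hugepages.sh@27-@33 (a single node here):
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$nr_hugepages   # expected pages per node
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || echo 'no NUMA nodes found' >&2
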
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8098852 kB' 'MemUsed: 4143120 kB' 'SwapCached: 0 kB' 'Active: 453240 kB' 'Inactive: 1281016 kB' 'Active(anon): 132396 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281016 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1612324 kB' 'Mapped: 48816 kB' 'AnonPages: 123544 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61496 kB' 'Slab: 135432 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.652 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 
16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.653 16:57:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... the same @32 test-and-continue plus @31 IFS/read cycle repeats for Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free ...]
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:06:15.653 node0=1024 expecting 1024
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:15.653 
00:06:15.653 real	0m1.191s
00:06:15.653 user	0m0.502s
00:06:15.653 sys	0m0.649s
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:15.653 16:57:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:06:15.653 ************************************
00:06:15.653 END TEST default_setup
00:06:15.653 ************************************
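The @127/@128 bookkeeping just above is verify_nr_hugepages summarizing the scan: each node's observed count is recorded as a key in sorted_t (and the system-side count in sorted_s, plausibly so the suite can later assert that all nodes agree), and the rendered 'node0=1024 expecting 1024' string feeds the @130 comparison. A minimal sketch of that tally-and-compare pattern, with illustrative values standing in for the real hugepages.sh state:

    # sketch only: distinct per-node counts become associative-array keys,
    # then the observed count is compared against the expectation (1024 here)
    declare -A sorted_t=()
    declare -a nodes_test=(1024)    # hypothetical: node 0 reports 1024 pages
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1
        echo "node${node}=${nodes_test[node]} expecting 1024"
    done
    [[ ${nodes_test[0]} == 1024 ]] && echo "default_setup: OK"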
00:06:15.912 16:57:08 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:06:15.912 16:57:08 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:15.912 16:57:08 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:15.912 16:57:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:15.912 ************************************
00:06:15.912 START TEST per_node_1G_alloc
00:06:15.912 ************************************
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
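The jump from size=1048576 to nr_hugepages=512 in the trace is plain division: the test requests 1048576 kB (1 GiB) of hugepage-backed memory, and the platform's default hugepage size is 2048 kB (per the Hugepagesize line in the meminfo snapshots below), so 1048576 / 2048 = 512 pages, all assigned to node 0. A sketch of that computation (variable names are illustrative, not the hugepages.sh locals):

    size_kb=1048576                                                  # requested size in kB
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
    nr_hugepages=$(( size_kb / hugepage_kb ))                        # 1048576 / 2048 = 512
    echo "nr_hugepages=$nr_hugepages on node 0"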
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:15.912 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:16.171 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:16.435 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:16.435 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
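The @17 through @31 lines above are setup/common.sh's get_meminfo helper preparing its scan: it slurps /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node is requested) with mapfile, strips any 'Node <N> ' prefix, then walks the lines with IFS=': ' read -r var val _ until the requested field matches. A condensed sketch of that idiom, as reconstructed from the trace (simplified, not the verbatim helper):

    shopt -s extglob                       # the +([0-9]) pattern below needs extglob

    get_meminfo() {                        # usage: get_meminfo <field> [numa-node]
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local var val _ line mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with 'Node N '
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # 'HugePages_Total:   512' -> var, val
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        echo 0                             # field absent -> report 0, as the trace does
    }

    get_meminfo HugePages_Total            # would print 512 on the box traced here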
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9144100 kB' 'MemAvailable: 10539548 kB' 'Buffers: 2436 kB' 'Cached: 1609892 kB' 'SwapCached: 0 kB' 'Active: 453528 kB' 'Inactive: 1281020 kB' 'Active(anon): 132684 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281020 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 123808 kB' 'Mapped: 48964 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135380 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73884 kB' 'KernelStack: 6416 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 357800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:16.435 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same @32 test-and-continue plus @31 IFS/read cycle repeats for every other field, MemFree through HardwareCorrupted ...]
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
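The snapshot above already confirms that the allocation from setup.sh landed: HugePages_Total and HugePages_Free both read 512, and the Hugetlb line is consistent with the page size, since 512 x 2048 kB = 1048576 kB. A quick stand-alone consistency check in the same spirit, valid on a system with a single hugepage size in use (illustrative, not part of the suite):

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    pagesz=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # in kB
    hugetlb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)        # in kB
    (( total * pagesz == hugetlb )) &&
        echo "consistent: ${total} pages x ${pagesz} kB = ${hugetlb} kB"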
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9144100 kB' 'MemAvailable: 10539552 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 453276 kB' 'Inactive: 1281024 kB' 'Active(anon): 132432 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123624 kB' 'Mapped: 48824 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135388 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73892 kB' 'KernelStack: 6384 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:16.437 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same @32 test-and-continue plus @31 IFS/read cycle repeats for every other field, MemFree through HugePages_Rsvd ...]
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:16.439 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9144100 kB' 'MemAvailable: 10539552 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 453072 kB' 'Inactive: 1281024 kB' 'Active(anon): 132228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123440 kB' 'Mapped: 48824 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135388 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73892 kB' 'KernelStack: 6400 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
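Both counters verify_nr_hugepages collects here read 0 in the snapshots: HugePages_Surp counts surplus pages allocated beyond nr_hugepages through overcommit, and HugePages_Rsvd counts pages promised to mappings but not yet faulted in, so with surp=0 and rsvd=0 all 512 configured pages are plain, unclaimed capacity. The same three reads done outside the harness (illustrative only):

    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
    echo "surp=$surp rsvd=$rsvd free=$free"   # expect 0 / 0 / 512 right after setup.sh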
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[xtrace elided: setup/common.sh@31-32 repeat the same continue / IFS=': ' / read -r var val _ cycle for every remaining meminfo field, MemFree through HugePages_Free, none matching HugePages_Rsvd]
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:16.441 nr_hugepages=512
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:06:16.441 resv_hugepages=0
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:16.441 surplus_hugepages=0
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:16.441 anon_hugepages=0
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9144100 kB' 'MemAvailable: 10539552 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 453072 kB' 'Inactive: 1281024 kB' 'Active(anon): 132228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123440 kB' 'Mapped: 48824 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135388 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73892 kB' 'KernelStack: 6468 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
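The trace above is the setup/common.sh get_meminfo helper at work: it slurps a meminfo file into an array, strips any per-node "Node N " prefix, and walks the fields with IFS=': ' until the requested key matches, echoing its value. A minimal sketch of that logic, reconstructed from the xtrace rather than copied from the SPDK source:

  # Sketch only: names and structure follow the xtrace above, not the
  # verbatim setup/common.sh implementation.
  shopt -s extglob                          # the +([0-9]) pattern below needs extended globs
  get_meminfo() {
      local get=$1 node=$2
      local var val _ line
      local mem_f=/proc/meminfo mem
      # Per-node queries read the sysfs copy instead, as the trace shows for node 0.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # sysfs per-node lines carry a "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

In the run above it returns 0 for HugePages_Rsvd, which hugepages.sh@100 stores as resv=0.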
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:16.441 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[xtrace elided: the continue / IFS=': ' / read cycle repeats for each field, MemFree through Unaccepted, none matching HugePages_Total]
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9144100 kB' 'MemUsed: 3097872 kB' 'SwapCached: 0 kB' 'Active: 453012 kB' 'Inactive: 1281024 kB' 'Active(anon): 132168 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 1612332 kB' 'Mapped: 48816 kB' 'AnonPages: 123400 kB' 'Shmem: 10464 kB' 'KernelStack: 6448 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61496 kB' 'Slab: 135388 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
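At this point verify_nr_hugepages has confirmed the accounting identity HugePages_Total (512) == nr_hugepages + surp + resv, get_nodes has found a single NUMA node via /sys/devices/system/node/node+([0-9]), and the call above is reading node 0's HugePages_Surp from the sysfs meminfo just dumped. The same per-node counters can be inspected directly through standard kernel sysfs interfaces (illustrative stand-alone commands, not taken from the test scripts):

  # Per-node hugepage counters for node 0 (2 MiB pages, matching the
  # 'Hugepagesize: 2048 kB' in the dumps above):
  grep HugePages_Surp /sys/devices/system/node/node0/meminfo
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/surplus_hugepages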
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.443 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.444 16:57:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:16.444 16:57:08 
00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[xtrace condensed: every remaining /proc/meminfo key from Slab through HugePages_Free is tested against HugePages_Surp and skipped with 'continue']
00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:16.444 node0=512 expecting 512
00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:06:16.444 
00:06:16.444 real 0m0.702s
00:06:16.444 user 0m0.330s
00:06:16.444 sys 0m0.428s
00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:16.444 16:57:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:16.444 ************************************
00:06:16.444 END TEST per_node_1G_alloc
00:06:16.444 ************************************
00:06:16.444 16:57:08 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:06:16.444 16:57:08 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:16.445 16:57:08 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:16.445 16:57:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:16.703 ************************************
00:06:16.703 START TEST even_2G_alloc
00:06:16.703 ************************************
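[editor's note: the get_test_nr_hugepages trace that follows requests 2 GiB (2097152 kB) and arrives at nr_hugepages=1024; a minimal sketch of that arithmetic, assuming the requested size and the default hugepage size are both in kB (the meminfo dumps further down report 'Hugepagesize: 2048 kB') and using illustrative names rather than the script's own:]
    #!/usr/bin/env bash
    size_kb=2097152        # requested: 2 GiB, the argument passed to get_test_nr_hugepages
    hugepage_kb=2048       # default hugepage size, per Hugepagesize in /proc/meminfo (assumption)
    (( size_kb >= hugepage_kb )) || exit 1    # mirrors the guard traced at setup/hugepages.sh@55
    echo $(( size_kb / hugepage_kb ))         # prints 1024, matching nr_hugepages=1024 below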
00:06:16.703 16:57:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:06:16.703 16:57:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:06:16.703 16:57:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:06:16.703 16:57:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:16.703 16:57:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:16.703 16:57:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:16.703 16:57:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[xtrace condensed: get_test_nr_hugepages_per_node sees no user-supplied node list (user_nodes=()), keeps _nr_hugepages=1024 for _no_nodes=1, and assigns nodes_test[0]=1024]
00:06:16.703 16:57:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:06:16.703 16:57:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:06:16.703 16:57:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:06:16.703 16:57:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:16.703 16:57:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:16.961 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:16.961 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:17.226 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
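[editor's note: the pattern test at setup/hugepages.sh@96 above matches the shape of /sys/kernel/mm/transparent_hugepage/enabled, where brackets mark the active mode ('always [madvise] never' here); the check appears to gate the AnonHugePages sampling on THP not being disabled. A hedged standalone sketch of that gate, not the script's own code:]
    #!/usr/bin/env bash
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. 'always [madvise] never' (assumed source of the traced string)
    if [[ $thp != *\[never\]* ]]; then
        echo "THP active ($thp); AnonHugePages is worth sampling"
    fi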
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:17.226 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8092904 kB' 'MemAvailable: 9488356 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 453140 kB' 'Inactive: 1281024 kB' 'Active(anon): 132296 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123412 kB' 'Mapped: 48944 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135368 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73872 kB' 'KernelStack: 6440 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the scan tests each key from MemTotal through HardwareCorrupted against AnonHugePages and skips it with 'continue']
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
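[editor's note: pieced together from the trace above, get_meminfo appears to slurp a meminfo file, strip any 'Node <n> ' prefix, then re-read each 'key: value' line with IFS=': ' until the requested key matches; a minimal sketch under those assumptions, not a verbatim copy of setup/common.sh:]
    #!/usr/bin/env bash
    shopt -s extglob
    get_meminfo() {                              # usage: get_meminfo AnonHugePages
        local get=$1 var val _ mem
        mapfile -t mem </proc/meminfo
        mem=("${mem[@]#Node +([0-9]) }")         # per-node meminfo files prefix lines with 'Node N '
        while IFS=': ' read -r var val _; do     # e.g. var=AnonHugePages val=0 _=kB
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo AnonHugePages                    # prints 0 on the system traced here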
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:17.228 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8092904 kB' 'MemAvailable: 9488356 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 452800 kB' 'Inactive: 1281024 kB' 'Active(anon): 131956 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123332 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135364 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73868 kB' 'KernelStack: 6416 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the scan tests each key from MemTotal through HugePages_Rsvd against HugePages_Surp and skips it with 'continue']
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
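[editor's note: the 'local node=' and the existence test against /sys/devices/system/node/node/meminfo in each call above (empty $node in this trace) suggest the same helper can be pointed at a single NUMA node's meminfo, whose lines carry the 'Node <n> ' prefix stripped earlier; a hedged sketch of that per-node path, with illustrative names:]
    #!/usr/bin/env bash
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo    # per-node counters instead of system-wide
    fi
    grep HugePages_Surp "$mem_f"    # per-node form reads e.g. 'Node 0 HugePages_Surp: 0'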
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:17.230 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8093016 kB' 'MemAvailable: 9488468 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 452968 kB' 'Inactive: 1281024 kB' 'Active(anon): 132124 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123492 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135364 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73868 kB' 'KernelStack: 6400 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the scan tests each key from MemTotal through CommitLimit against HugePages_Rsvd and skips it with 'continue']
00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:17.232 
16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 
16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:17.232 nr_hugepages=1024 00:06:17.232 resv_hugepages=0 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:17.232 surplus_hugepages=0 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:17.232 anon_hugepages=0 00:06:17.232 16:57:09 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:17.232 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8093268 kB' 'MemAvailable: 9488720 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 453016 kB' 'Inactive: 1281024 kB' 'Active(anon): 132172 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123544 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135364 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73868 kB' 'KernelStack: 6400 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
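The long runs of `[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]` / `continue` pairs above and below are the xtrace of get_meminfo in setup/common.sh: it snapshots /proc/meminfo (or a per-NUMA-node meminfo) with `printf`/`mapfile`, splits each line with `IFS=': ' read -r var val _`, and skips every field until it reaches the requested key — here HugePages_Rsvd (which returned 0) and then HugePages_Total. A minimal sketch of that lookup technique, assuming a simplified standalone helper (get_meminfo_value is a hypothetical name, not the repo's):

```bash
#!/usr/bin/env bash
shopt -s extglob

# Minimal sketch of the lookup the trace is executing. Modeled on
# get_meminfo in SPDK's test/setup/common.sh but simplified; the name
# get_meminfo_value is ours, not the repo's.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    # With a node argument, read that NUMA node's counters instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }   # per-node files prefix "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        # Every key that is not the requested one is skipped; each skip
        # is one "[[ ... ]] / continue / IFS / read" quartet in the log.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Example: resv=$(get_meminfo_value HugePages_Rsvd)     # 0 in this run
#          surp=$(get_meminfo_value HugePages_Surp 0)   # node 0, also 0
```

Each skipped meminfo field costs several traced statements, which is why a single get_meminfo call dominates hundreds of log lines here.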
00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.233 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.234 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8093268 kB' 'MemUsed: 4148704 kB' 'SwapCached: 0 kB' 'Active: 452856 kB' 'Inactive: 1281024 kB' 'Active(anon): 132012 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 1612332 kB' 'Mapped: 48816 kB' 'AnonPages: 123120 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61496 kB' 'Slab: 135364 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 
16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.235 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
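The per-node HugePages_Surp scan that finishes just below supplies the last input to the harness's accounting check: hugepages.sh asserts that the kernel's HugePages_Total equals the requested count plus surplus plus reserved pages, then folds each node's total into nodes_test before printing "node0=1024 expecting 1024". A hedged sketch of that bookkeeping, with variable names borrowed from the trace but the wiring being our reconstruction rather than the repo script:

```bash
#!/usr/bin/env bash
# Sketch of the accounting check hugepages.sh performs once the scans
# return; nr_hugepages/surp/resv mirror the trace, the rest is ours.
nr_hugepages=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

# The invariant asserted at hugepages.sh@107 and @110 in the trace:
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting is off" >&2

# Per-node pass: each node<N> directory contributes its own count; on
# this single-node VM the whole allocation is expected on node 0.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    n=${node_dir##*node}
    count=$(awk '/HugePages_Total/ {print $4}' "$node_dir/meminfo")
    echo "node$n=$count expecting $nr_hugepages"
done
```

Note the per-node meminfo format ("Node 0 HugePages_Total: 1024") puts the value in the fourth field, which is also why the traced helper strips the "Node N " prefix before parsing.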
00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:17.236 node0=1024 expecting 1024 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:17.236 00:06:17.236 real 0m0.729s 00:06:17.236 user 0m0.348s 00:06:17.236 sys 0m0.430s 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.236 16:57:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:17.236 
************************************ 00:06:17.236 END TEST even_2G_alloc 00:06:17.236 ************************************ 00:06:17.236 16:57:09 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:06:17.236 16:57:09 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.236 16:57:09 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.495 16:57:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:17.495 ************************************ 00:06:17.495 START TEST odd_alloc 00:06:17.495 ************************************ 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:17.495 16:57:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:17.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:18.017 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:18.017 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:06:18.017 16:57:10 
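even_2G_alloc passes (node0=1024 expecting 1024, about 0.73 s wall time), and odd_alloc begins above by calling get_test_nr_hugepages with size=2098176 kB, arriving at nr_hugepages=1025 and driving it through the same setup.sh path with HUGEMEM=2049 (megabytes). With 2048 kB pages, 2098176 kB is 1024.5 pages, so the harness lands on a deliberately odd count of 1025. A sketch of that sizing arithmetic; this is our ceil-based reconstruction, and the repo's get_test_nr_hugepages may compute it differently while reaching the same result for this input:

```bash
#!/usr/bin/env bash
# Sizing math behind the odd_alloc trace (our reconstruction).
size_kb=2098176                        # HUGEMEM=2049 (MB) * 1024
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 here
# Round up so a fractional page still gets allocated: 1024.5 -> 1025.
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "nr_hugepages=$nr_hugepages"      # 1025 -- an odd count, on purpose
```

verify_nr_hugepages then repeats the same get_meminfo scan cycle seen in even_2G_alloc, this time expecting 1025 total and free hugepages, as the trace below shows.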
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:18.017 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8090524 kB' 'MemAvailable: 9485976 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 453180 kB' 'Inactive: 1281024 kB' 'Active(anon): 132336 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 123500 kB' 'Mapped: 48944 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135380 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73884 kB' 'KernelStack: 6440 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: each /proc/meminfo key from MemTotal through HardwareCorrupted is tested against AnonHugePages and skipped with '-- # continue']
00:06:18.018 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:18.018 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:18.018 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:18.018 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:18.018 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace elided: the get_meminfo preamble (setup/common.sh@17-@31) repeats as above, this time with get=HugePages_Surp]
00:06:18.019 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8090524 kB' 'MemAvailable: 9485976 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 453156 kB' 'Inactive: 1281024 kB' 'Active(anon): 132312 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 123444 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135380 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73884 kB' 'KernelStack: 6416 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: each key from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped with '-- # continue']
00:06:18.020 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:18.020 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:18.020 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:18.020 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
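Every scan in this test follows the same setup/common.sh pattern: split each /proc/meminfo line on ': ', skip non-matching keys with continue, and echo the value of the requested key. A self-contained sketch of that loop (a standalone rewrite, not the exact SPDK function):

#!/usr/bin/env bash
# Standalone version of the get_meminfo pattern traced above.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do      # the IFS=': ' / read pairs in the xtrace
        [[ $var == "$get" ]] || continue      # the repeated [[ key == pattern ]] / continue lines
        echo "$val"                           # e.g. 0 for HugePages_Surp on this machine
        return 0
    done < /proc/meminfo
    return 1
}
get_meminfo HugePages_Surp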
00:06:18.020 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace elided: the get_meminfo preamble (setup/common.sh@17-@31) repeats as above, this time with get=HugePages_Rsvd]
00:06:18.020 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8090524 kB' 'MemAvailable: 9485976 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 452856 kB' 'Inactive: 1281024 kB' 'Active(anon): 132012 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 123440 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135376 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73880 kB' 'KernelStack: 6416 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: each key from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped with '-- # continue']
00:06:18.022 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:18.022 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:18.022 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:18.022 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:18.022 nr_hugepages=1025
00:06:18.022 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:06:18.022 resv_hugepages=0
00:06:18.022 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:18.022 surplus_hugepages=0
00:06:18.022 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:18.022 anon_hugepages=0
00:06:18.022 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:18.022 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:06:18.022 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:06:18.022 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace elided: the get_meminfo preamble (setup/common.sh@17-@31) repeats as above, this time with get=HugePages_Total]
00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8090524
kB' 'MemAvailable: 9485976 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 453100 kB' 'Inactive: 1281024 kB' 'Active(anon): 132256 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 123440 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135376 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73880 kB' 'KernelStack: 6416 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 
16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.023 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:18.024 
16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.024 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8090784 kB' 'MemUsed: 4151188 kB' 'SwapCached: 0 kB' 'Active: 452880 kB' 'Inactive: 1281024 kB' 'Active(anon): 132036 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1612332 kB' 'Mapped: 48816 kB' 'AnonPages: 123188 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61496 kB' 'Slab: 135376 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.284 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:18.285 node0=1025 expecting 1025 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:06:18.285 
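Editor's note: the get_meminfo scans traced above (HugePages_Rsvd and HugePages_Total against /proc/meminfo, then HugePages_Surp against node0's meminfo) all follow one pattern: pick the global or per-node meminfo file, strip any "Node <n> " prefix, then read key/value pairs with IFS=': ' until the requested key matches. A condensed sketch of that loop, reconstructed from the trace rather than copied verbatim from setup/common.sh:

# Condensed reconstruction of the scan traced above (not the verbatim helper).
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Mirrors common.sh@22-@24: prefer the per-node view when a node is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Mirrors common.sh@31-@33: split each line on ': ' and stop at the first
    # key that matches, printing only the numeric value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node lines carry a 'Node <n> ' prefix
    return 1
}

Against the dumps above, get_meminfo_sketch HugePages_Rsvd prints 0 and get_meminfo_sketch HugePages_Total prints 1025, matching the "echo 0" / "echo 1025" then "return 0" records that close each scan.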
00:06:18.285 real 0m0.801s
00:06:18.285 user 0m0.370s
00:06:18.285 sys 0m0.457s
00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:18.285 16:57:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:18.285 ************************************
00:06:18.285 END TEST odd_alloc
00:06:18.285 ************************************
00:06:18.285 16:57:10 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:06:18.285 16:57:10 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:18.285 16:57:10 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:18.285 16:57:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:18.285 ************************************
00:06:18.285 START TEST custom_alloc
00:06:18.285 ************************************
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
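Editor's note: the get_test_nr_hugepages trace just above reduces to one division: the requested pool size in kB over the runner's default huge page size (Hugepagesize: 2048 kB in the meminfo dumps). A minimal sketch of that sizing step, assuming the division is how @57 arrives at 512 (the trace only shows the result):

# Sizing traced at setup/hugepages.sh@49-@84, reduced to its arithmetic.
nodes_test=()
size=1048576                                 # requested pool: 1 GiB expressed in kB
default_hugepages=2048                       # kB per huge page ('Hugepagesize: 2048 kB' above)
(( size >= default_hugepages )) || exit 1    # the @55 guard
nr_hugepages=$(( size / default_hugepages )) # 1048576 / 2048 = 512, matching @57
_no_nodes=1                                  # single NUMA node on this VM
nodes_test[_no_nodes - 1]=$nr_hugepages      # @82: nodes_test[0]=512
echo "nodes_test[0]=${nodes_test[0]}"        # -> 512

custom_alloc then folds that count into HUGENODE='nodes_hp[0]=512' (the @181-@187 records below) before re-running scripts/setup.sh.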
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:18.285 16:57:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:18.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:18.856 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:18.856 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:18.856 16:57:11
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.856 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9140780 kB' 'MemAvailable: 10536232 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 448608 kB' 'Inactive: 1281024 kB' 'Active(anon): 127764 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 118912 kB' 'Mapped: 48388 kB' 'Shmem: 10464 kB' 'KReclaimable: 61496 kB' 'Slab: 135384 kB' 'SReclaimable: 61496 kB' 'SUnreclaim: 73888 kB' 'KernelStack: 6336 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
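Editor's note on the @96 record a few lines up: verify_nr_hugepages only counts AnonHugePages because the system THP mode string ("always [madvise] never") does not contain "[never]". A sketch of that gate, assuming the string comes from the usual sysfs file (the trace shows only the expanded value); the records below continue the AnonHugePages scan, which is the same meminfo loop sketched earlier:

# Gate reconstructed from the @96 test; the sysfs path is an assumption
# (standard kernel location), since the trace shows only the expanded string.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)          # reuse of the loop sketched earlier
else
    anon=0                                            # THP off: nothing to account for
fi
echo "anon_hugepages=$anon"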
00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.857 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9140780 kB' 'MemAvailable: 10536228 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 448468 kB' 'Inactive: 1281024 kB' 'Active(anon): 127624 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 118792 kB' 'Mapped: 48376 kB' 'Shmem: 10464 kB' 'KReclaimable: 61484 kB' 'Slab: 135236 kB' 'SReclaimable: 61484 kB' 'SUnreclaim: 73752 kB' 'KernelStack: 6288 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.858 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.859 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.860 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9140780 kB' 'MemAvailable: 10536232 kB' 'Buffers: 2436 kB' 'Cached: 1609900 kB' 'SwapCached: 0 kB' 'Active: 448448 kB' 'Inactive: 1281028 kB' 'Active(anon): 127604 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 118768 kB' 'Mapped: 48276 kB' 'Shmem: 10464 kB' 'KReclaimable: 61484 kB' 'Slab: 135212 kB' 'SReclaimable: 61484 kB' 'SUnreclaim: 73728 kB' 'KernelStack: 6288 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.861 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.862 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.863 16:57:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:18.863 nr_hugepages=512 00:06:18.863 resv_hugepages=0 00:06:18.863 surplus_hugepages=0 00:06:18.863 anon_hugepages=0 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:18.863 
16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9140780 kB' 'MemAvailable: 10536228 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 448008 kB' 'Inactive: 1281024 kB' 'Active(anon): 127164 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 118600 kB' 'Mapped: 48076 kB' 'Shmem: 10464 kB' 'KReclaimable: 61484 kB' 'Slab: 135196 kB' 'SReclaimable: 61484 kB' 'SUnreclaim: 73712 kB' 'KernelStack: 6320 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
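A quick sanity check on the dump just printed: 512 pages at the 2048 kB Hugepagesize account for the entire Hugetlb figure, and HugePages_Free: 512 shows none of the pool is in use yet. The arithmetic, written out:

  echo $(( 512 * 2048 ))  # HugePages_Total x Hugepagesize (kB) -> 1048576, the Hugetlb value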
00:06:18.863 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue  [xtrace collapsed: the compare-and-continue cycle repeats for every key from MemTotal through Unaccepted]
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
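This get_meminfo call carries a node argument (HugePages_Surp 0), so common.sh@23-24 swap the source from /proc/meminfo to the per-node sysfs copy, whose lines carry a "Node 0 " prefix that @29 strips. A sketch of that selection, as a simplification of the traced logic (the +([0-9]) pattern needs extglob, which the real script is assumed to have enabled):

  node=0
  mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  shopt -s extglob                    # required by the +([0-9]) pattern below
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node 0 " prefix sysfs adds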
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9140528 kB' 'MemUsed: 3101444 kB' 'SwapCached: 0 kB' 'Active: 447952 kB' 'Inactive: 1281024 kB' 'Active(anon): 127108 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'FilePages: 1612332 kB' 'Mapped: 48076 kB' 'AnonPages: 118504 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61484 kB' 'Slab: 135184 kB' 'SReclaimable: 61484 kB' 'SUnreclaim: 73700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:06:18.865 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  [xtrace collapsed: repeated for every node0 key from MemTotal through HugePages_Free]
00:06:18.867 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:18.867 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:18.867 16:57:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:18.867 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:18.867 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:18.867 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:18.867 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:18.867 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:06:18.867 node0=512 expecting 512
00:06:18.867 16:57:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:06:18.867
00:06:18.867 real 0m0.730s
00:06:18.867 user 0m0.326s
00:06:18.867 sys 0m0.416s
00:06:18.867 16:57:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:18.867 ************************************
00:06:18.867 END TEST custom_alloc
00:06:18.867 ************************************
00:06:18.867 16:57:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
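custom_alloc passes because every consistency check collapses to 512 == 512: the global HugePages_Total equals nr_hugepages + surp + resv, and node0, after adding the reserved and surplus counts (both 0), still holds the expected 512 pages. Condensed to its arithmetic, the verification just traced looks roughly like this summary sketch (values recovered from the trace; not the hugepages.sh source verbatim):

  nr_hugepages=512 surp=0 resv=0               # recovered by get_meminfo above
  total=512 node0_surp=0                       # HugePages_Total / HugePages_Surp 0
  nodes_test=(512)                             # expected pages on node 0
  (( total == nr_hugepages + surp + resv ))    # 512 == 512 + 0 + 0, global check
  (( nodes_test[0] += resv + node0_surp ))     # still 512 after node adjustments
  echo "node0=${nodes_test[0]} expecting 512"
  [[ ${nodes_test[0]} == 512 ]]                # the @130 assertion, test passes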
00:06:19.125 16:57:11 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:06:19.125 16:57:11 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:19.125 16:57:11 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:19.125 16:57:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:19.126 ************************************
00:06:19.126 START TEST no_shrink_alloc
00:06:19.126 ************************************
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
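get_test_nr_hugepages just converted the requested pool size into a page count: the 2097152 argument is in kB, and at the 2048 kB default hugepage size that yields the nr_hugepages=1024 seen at @57; the single node id '0' then receives the whole pool at @71. The arithmetic, written out (values taken from the trace; the division step is an inference consistent with them, not quoted source):

  size=2097152                                  # requested pool in kB (2 GiB)
  default_hugepages=2048                        # Hugepagesize in kB
  (( size >= default_hugepages ))               # the @55 guard
  nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024
  nodes_test[0]=$nr_hugepages                   # node '0' takes all 1024 pages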
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:19.126 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:19.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:19.695 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:19.695 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:19.695 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:06:19.695 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:06:19.695 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:19.695 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:19.695 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:19.695 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:19.695 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:19.695 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:19.695 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:19.696 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:19.696 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:19.696 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:19.696 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:19.696 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:19.696 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:19.696 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:19.696 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:19.696 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:19.696 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:19.696 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8097160 kB' 'MemAvailable: 9492608 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 448544 kB' 'Inactive: 1281024 kB' 'Active(anon): 127700 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 118844 kB' 'Mapped: 48156 kB' 'Shmem: 10464 kB' 'KReclaimable: 61484 kB' 'Slab: 135144 kB' 'SReclaimable: 61484 kB' 'SUnreclaim: 73660 kB' 'KernelStack: 6324 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
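The refreshed dump shows the resize took effect: HugePages_Total and HugePages_Free are both 1024, and the pool size matches the request exactly. CommitLimit also fell from 13985300 kB to 13461012 kB, a 524288 kB drop consistent with the extra 1048576 kB of hugetlb memory being excluded from overcommit at the default overcommit_ratio of 50 (an inference from the numbers, not something the log states):

  echo $(( 1024 * 2048 ))  # pages x Hugepagesize (kB) -> 2097152, matching Hugetlb and the size requested at @49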
00:06:19.696 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue  [xtrace collapsed: the compare-and-continue cycle repeats for every key from MemTotal through HardwareCorrupted]
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8097668 kB' 'MemAvailable: 9493116 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 448320 kB' 'Inactive: 1281024 kB' 'Active(anon): 127476 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB'
'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 118608 kB' 'Mapped: 48088 kB' 'Shmem: 10464 kB' 'KReclaimable: 61484 kB' 'Slab: 135136 kB' 'SReclaimable: 61484 kB' 'SUnreclaim: 73652 kB' 'KernelStack: 6276 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.697 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.698 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.698 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.698 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.698 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.698 16:57:11 
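The xtrace above is SPDK's get_meminfo helper walking /proc/meminfo one key at a time until it finds the requested counter. A minimal sketch of the loop the trace implies (reconstructed from the trace, not copied from setup/common.sh, so names and details may differ from the real script):

    shopt -s extglob

    get_meminfo() { # sketch: look up one key, optionally for one NUMA node
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # per-node stats live under /sys; fall back to the global file
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem <"$mem_f"
        # strip the "Node N " prefix carried by the per-node files
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue # skip until the requested key
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp # prints 0 on the box in this run

Each hugepage query below replays exactly this scan, which is why the same per-key trace repeats for HugePages_Surp, HugePages_Rsvd and HugePages_Total.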
[... xtrace elided: the same per-key scan repeats for HugePages_Surp; every /proc/meminfo key from MemTotal through HugePages_Rsvd is read and skipped with 'continue' ...]
00:06:19.699 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:19.699 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:19.699 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:19.699 16:57:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:19.699 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:19.699 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:19.699 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:19.699 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:19.699 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:19.699 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:19.699 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:19.699 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:19.699 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:19.699 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:19.699 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:19.699 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:19.699 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8097416 kB' 'MemAvailable: 9492864 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 448216 kB' 'Inactive: 1281024 kB' 'Active(anon): 127372 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 118552 kB' 'Mapped: 48080 kB' 'Shmem: 10464 kB' 'KReclaimable: 61484 kB' 'Slab: 135136 kB' 'SReclaimable: 61484 kB' 'SUnreclaim: 73652 kB' 'KernelStack: 6288 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
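For a quick spot-check outside the harness, the four hugepage counters in the dump above can be read directly; a hypothetical one-liner, with the values this run reports shown as comments:

    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
    # HugePages_Total: 1024
    # HugePages_Free:  1024
    # HugePages_Rsvd:  0
    # HugePages_Surp:  0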
[... xtrace elided: per-key scan for HugePages_Rsvd; every key from MemTotal through HugePages_Free is read and skipped with 'continue' ...]
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:19.701 nr_hugepages=1024
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
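The bookkeeping above (hugepages.sh@97-109) amounts to the following sketch; it reuses the hypothetical get_meminfo sketch from earlier, and the '|| exit 1' failure handling is an assumption rather than the script's actual control flow:

    nr_hugepages=1024                    # pool size this test works with
    anon=$(get_meminfo AnonHugePages)    # 0 kB in this run
    surp=$(get_meminfo HugePages_Surp)   # 0
    resv=$(get_meminfo HugePages_Rsvd)   # 0

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # the pool only counts as stable when the expected total matches
    # nr_hugepages plus surplus plus reserved, with no extras outstanding
    (( 1024 == nr_hugepages + surp + resv )) || exit 1
    (( 1024 == nr_hugepages )) || exit 1

With both checks passing, the script goes on to read HugePages_Total back from /proc/meminfo, which is the scan that follows.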
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:19.701 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8097416 kB' 'MemAvailable: 9492864 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 448312 kB' 'Inactive: 1281024 kB' 'Active(anon): 127468 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 118628 kB' 'Mapped: 48080 kB' 'Shmem: 10464 kB' 'KReclaimable: 61484 kB' 'Slab: 135136 kB' 'SReclaimable: 61484 kB' 'SUnreclaim: 73652 kB' 'KernelStack: 6304 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 
16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 
16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.702 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- 
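[annotation: the condensed loop above is setup/common.sh's get_meminfo helper scanning /proc/meminfo. A minimal bash sketch reconstructed from the xtrace alone -- the real helper may differ in details:]

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern seen in the trace
    # Hedged reconstruction of setup/common.sh get_meminfo from the xtrace.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        # With a node argument, per-node counters come from sysfs instead
        # (seen later in the trace as /sys/devices/system/node/node0/meminfo).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on sysfs lines
        while IFS=': ' read -r var val _; do
            # Skip every key until the requested one, then print its value.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

[usage matching the trace: get_meminfo HugePages_Total prints 1024 here; get_meminfo HugePages_Surp 0 reads node 0's surplus from sysfs]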
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:19.703 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8097168 kB' 'MemUsed: 4144804 kB' 'SwapCached: 0 kB' 'Active: 448360 kB' 'Inactive: 1281024 kB' 'Active(anon): 127516 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1612332 kB' 'Mapped: 48080 kB' 'AnonPages: 118644 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61484 kB' 'Slab: 135136 kB' 'SReclaimable: 61484 kB' 'SUnreclaim: 73652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
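[annotation: the assertion at setup/hugepages.sh@110 and the per-node loop at @115-117 amount to the accounting below. A hedged sketch using the get_meminfo reconstruction above -- variable names are taken from the trace, the seed values and surrounding logic are assumptions:]

    # Hedged sketch of the accounting traced at setup/hugepages.sh@110-117.
    # nr_hugepages, surp, resv and the nodes_test seed come from earlier,
    # untraced steps; the values here are what this run shows.
    nr_hugepages=1024 surp=0 resv=0
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) ||
        echo "unexpected HugePages_Total" >&2
    declare -A nodes_test=([0]=1024)   # expected pages per NUMA node (one node in this VM)
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        # Per-node surplus, read from /sys/devices/system/node/node$node/meminfo:
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done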
[00:06:19.703-00:06:19.705: repeated xtrace from setup/common.sh@31-32 condensed -- the same IFS=': '/read loop walks the node0 meminfo keys printed above, testing each against HugePages_Surp and continuing past every non-match from MemTotal through HugePages_Free]
00:06:19.705 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:19.705 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:19.705 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:19.705 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:19.705 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:19.705 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:19.705 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:19.705 node0=1024 expecting 1024
00:06:19.705 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:06:19.705 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:19.705 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:06:19.705 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:06:19.705 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:06:19.705 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:19.705 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:20.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:20.274 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:20.274 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:20.274 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
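[annotation: the INFO line follows from the environment set at setup/hugepages.sh@202. The equivalent manual invocation, with the path and values exactly as the trace shows them:]

    # Re-run of the step traced at setup/hugepages.sh@202.
    # CLEAR_HUGE=no keeps the existing pool, so the 512-page request is
    # satisfied by the 1024 pages already allocated on node0, which is why
    # setup.sh prints "Requested 512 hugepages but 1024 already allocated".
    CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh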
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:20.274 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8104160 kB' 'MemAvailable: 9499608 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 449076 kB' 'Inactive: 1281024 kB' 'Active(anon): 128232 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 119316 kB' 'Mapped: 48468 kB' 'Shmem: 10464 kB' 'KReclaimable: 61484 kB' 'Slab: 135036 kB' 'SReclaimable: 61484 kB' 'SUnreclaim: 73552 kB' 'KernelStack: 6392 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[00:06:20.274-00:06:20.275: repeated xtrace from setup/common.sh@31-32 condensed -- the read loop tests every /proc/meminfo key from MemTotal through HardwareCorrupted against AnonHugePages, continuing past each non-match]
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:20.275 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
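[annotation: verify_nr_hugepages first accounts for anonymous THP -- the check at setup/hugepages.sh@96 matches what looks like /sys/kernel/mm/transparent_hugepage/enabled ('always [madvise] never'), and since [never] is not selected, AnonHugePages is fetched and comes back 0 here. A hedged sketch of that branch; the sysfs path is an assumption inferred from the tested string:]

    # Hedged sketch of the anon-THP accounting traced at setup/hugepages.sh@96-97.
    anon=0
    if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *\[never\]* ]]; then
        # THP may back anonymous memory, so count it (0 kB in this run).
        anon=$(get_meminfo AnonHugePages)
    fi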
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118636 kB' 'Mapped: 48080 kB' 'Shmem: 10464 kB' 'KReclaimable: 61484 kB' 'Slab: 135040 kB' 'SReclaimable: 61484 kB' 'SUnreclaim: 73556 kB' 'KernelStack: 6304 kB' 'PageTables: 3748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.276 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.277 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.538 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.538 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.538 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.538 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.538 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.538 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.538 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.538 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.538 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.538 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.538 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.538 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.538 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8104160 kB' 'MemAvailable: 9499608 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 448316 kB' 'Inactive: 1281024 kB' 'Active(anon): 127472 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118632 kB' 'Mapped: 48080 kB' 'Shmem: 10464 kB' 'KReclaimable: 61484 kB' 'Slab: 135036 kB' 'SReclaimable: 61484 kB' 'SUnreclaim: 73552 kB'
'KernelStack: 6304 kB' 'PageTables: 3748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.539 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
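[Note on the trace above and below] The four commands that repeat throughout this window, IFS=': ', read -r var val _, a [[ <key> == <pattern> ]] test, and continue, are successive iterations of the get_meminfo scan in setup/common.sh: each call walks a snapshot of /proc/meminfo one "Key: value kB" line at a time until the requested key matches, prints the value, and returns. A minimal sketch of that helper, reconstructed only from the commands visible in this trace (xtrace does not print redirections, so the loop plumbing here is an assumption and the real SPDK helper may differ):

    # Sketch of setup/common.sh:get_meminfo as reconstructed from this xtrace.
    # Assumption: the while-loop/process-substitution plumbing; the trace shows
    # only the expanded simple commands (common.sh@16-33 in this log).
    shopt -s extglob                          # the +([0-9]) pattern below needs it
    get_meminfo() {
        local get=$1 node=${2:-}              # common.sh@17-18 (node is empty here)
        local var val
        local mem_f mem
        mem_f=/proc/meminfo                   # common.sh@22
        # A per-NUMA-node query would read that node's meminfo instead; with
        # node empty, the @23 probe of .../node/meminfo simply fails.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"              # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }")      # common.sh@29: drop "Node N " prefixes
        while IFS=': ' read -r var val _; do  # the repeating @31 lines
            [[ $var == "$get" ]] || continue  # the repeating @32 tests
            echo "$val"                       # common.sh@33
            return 0                          # common.sh@33
        done < <(printf '%s\n' "${mem[@]}")   # common.sh@16: the meminfo dump
        return 1
    }

The surrounding setup/hugepages.sh lines (@97-@110, visible above and below) call this helper for the no_shrink_alloc bookkeeping. Roughly, with the wrapper name being a hypothetical label for this sketch only:

    # Hypothetical wrapper illustrating the traced hugepages.sh@97-110 sequence.
    check_no_shrink_accounting() {
        local nr_hugepages=1024                  # the requested pool size
        local anon surp resv
        anon=$(get_meminfo AnonHugePages)        # hugepages.sh@97  -> anon=0
        surp=$(get_meminfo HugePages_Surp)       # hugepages.sh@99  -> surp=0
        resv=$(get_meminfo HugePages_Rsvd)       # hugepages.sh@100 -> resv=0
        echo "nr_hugepages=$nr_hugepages"        # hugepages.sh@102-105 echoes
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        # hugepages.sh@107/@109: the pool must still equal the request exactly,
        # with no surplus or reserved pages folded in. The literal 1024 on the
        # left is the expanded value of a variable this log window does not name.
        (( 1024 == nr_hugepages + surp + resv ))
        (( 1024 == nr_hugepages ))
        get_meminfo HugePages_Total              # hugepages.sh@110 -> 1024
    }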
00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.540 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:20.541 nr_hugepages=1024
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:20.541 resv_hugepages=0
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:20.541 surplus_hugepages=0
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:20.541 anon_hugepages=0
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8104160 kB' 'MemAvailable: 9499608 kB' 'Buffers: 2436 kB' 'Cached: 1609896 kB' 'SwapCached: 0 kB' 'Active: 448260 kB' 'Inactive: 1281024 kB' 'Active(anon): 127416 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118524 kB' 'Mapped: 48080 kB' 'Shmem: 10464 kB' 'KReclaimable: 61484 kB' 'Slab: 135036 kB' 'SReclaimable: 61484 kB' 'SUnreclaim: 73552 kB' 'KernelStack: 6288 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB'
'CommitLimit: 13461012 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:20.541 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@32 compares each remaining /proc/meminfo key -- Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted -- against HugePages_Total and hits `continue` on every non-match]
00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:20.543 16:57:12
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8104160 kB' 'MemUsed: 4137812 kB' 'SwapCached: 0 kB' 'Active: 448260 kB' 'Inactive: 1281024 kB' 'Active(anon): 127416 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1281024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1612332 kB' 'Mapped: 48080 kB' 'AnonPages: 118524 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61484 kB' 'Slab: 135036 kB' 'SReclaimable: 61484 kB' 'SUnreclaim: 73552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.543 16:57:12 
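[The xtrace above is setup/common.sh's get_meminfo doing a key lookup: it splits each meminfo row on ': ', skips every key that is not the requested one, and echoes the value (1024 system-wide, then the node0 snapshot printed by the @16 printf). A minimal self-contained sketch of the same pattern; the function name and argument handling here are illustrative, not the repo's exact source:]

    get_meminfo_value() {
        local get=$1 node=${2-} line var val rest
        local mem_f=/proc/meminfo
        # per-node counters live in sysfs rather than /proc
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            # per-node rows carry a "Node <n> " prefix; strip it first
            if [[ $line == "Node "* ]]; then
                line=${line#Node }
                line=${line#* }
            fi
            IFS=': ' read -r var val rest <<< "$line"
            [[ $var == "$get" ]] || continue   # the continue chain seen above
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }
    # e.g. get_meminfo_value HugePages_Total    -> 1024 on this host
    #      get_meminfo_value HugePages_Surp 0   -> 0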
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.543 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the same @32 comparison and `continue` repeat for each node0 meminfo key -- MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted -- none of which is HugePages_Surp]
00:06:20.544 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:20.544 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.544 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.544 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.544 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.544 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:20.544 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:20.544 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:20.544 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.544 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:20.544 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:20.545 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:20.545 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:20.545 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:20.545 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:20.545 node0=1024 expecting 1024 00:06:20.545 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:20.545 16:57:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:20.545 00:06:20.545 real 0m1.496s 00:06:20.545 user 0m0.684s 00:06:20.545 sys 0m0.855s 00:06:20.545 ************************************ 00:06:20.545 16:57:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.545 16:57:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:20.545 END TEST no_shrink_alloc 00:06:20.545 ************************************ 00:06:20.545 16:57:12 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:06:20.545 16:57:12 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:20.545 16:57:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:20.545 16:57:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:20.545 16:57:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:20.545 16:57:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:20.545 16:57:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:20.545 16:57:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:20.545 16:57:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:20.545 00:06:20.545 real 0m6.241s 00:06:20.545 user 0m2.754s 00:06:20.545 sys 0m3.617s 00:06:20.545 16:57:12 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.545 16:57:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:20.545 ************************************ 00:06:20.545 END TEST hugepages 00:06:20.545 ************************************ 00:06:20.545 16:57:12 setup.sh -- 
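[clear_hp above walks every node's hugepages directories and writes 0 to each nr_hugepages, returning the 1024 reserved pages to the kernel before the next test group. A standalone sketch of that teardown (standard sysfs paths, root required):]

    # zero every hugepage pool on every NUMA node, as hugepages.sh@39-@45 does
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
            [[ -e $hp ]] || continue   # glob may not match every page size
            echo 0 > "$hp"
        done
    done
    export CLEAR_HUGE=yes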
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:20.545 16:57:12 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.545 16:57:12 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.545 16:57:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:20.545 ************************************ 00:06:20.545 START TEST driver 00:06:20.545 ************************************ 00:06:20.545 16:57:12 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:20.803 * Looking for test storage... 00:06:20.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:20.803 16:57:13 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:06:20.803 16:57:13 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:20.803 16:57:13 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:21.752 16:57:13 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:21.752 16:57:13 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.752 16:57:13 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.752 16:57:13 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:21.752 ************************************ 00:06:21.752 START TEST guess_driver 00:06:21.752 ************************************ 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:06:21.752 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:21.752 Looking for driver=uio_pci_generic 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:21.752 16:57:13 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:22.692 16:57:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:06:22.692 16:57:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:06:22.692 16:57:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:22.692 16:57:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:22.692 16:57:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:22.692 16:57:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:22.692 16:57:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:22.692 16:57:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:22.692 16:57:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:22.692 16:57:15 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:22.692 16:57:15 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:22.692 16:57:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:22.692 16:57:15 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:23.629 ************************************ 00:06:23.629 END TEST guess_driver 00:06:23.629 ************************************ 00:06:23.629 00:06:23.629 real 0m1.916s 00:06:23.629 user 0m0.668s 00:06:23.629 sys 0m1.290s 00:06:23.629 16:57:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.629 16:57:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:23.629 ************************************ 00:06:23.629 END TEST driver 00:06:23.629 ************************************ 00:06:23.629 00:06:23.629 real 0m2.924s 00:06:23.629 user 0m1.012s 00:06:23.629 sys 0m2.049s 00:06:23.629 16:57:15 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.629 16:57:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:23.629 16:57:15 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:23.629 16:57:15 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.629 16:57:15 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.629 16:57:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:23.629 ************************************ 00:06:23.629 START TEST devices 00:06:23.629 
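[guess_driver lands on uio_pci_generic because this VM exposes no IOMMU groups and unsafe no-IOMMU mode is off, so the vfio branch returns 1 and the uio fallback resolves through modprobe. A condensed sketch of that decision; the function name is illustrative and the repo's version reads its "Looking for driver=" markers somewhat differently:]

    pick_driver_sketch() {
        local noiommu=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
        # vfio-pci needs populated IOMMU groups, or unsafe no-IOMMU mode set to Y
        if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null ||
           [[ -e $noiommu && $(< "$noiommu") == Y ]]; then
            echo vfio-pci && return 0
        fi
        # is_driver: a module counts as present if modprobe resolves it to a .ko
        if modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
            echo uio_pci_generic && return 0
        fi
        echo 'No valid driver found' && return 1
    }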
************************************ 00:06:23.629 16:57:15 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:23.629 * Looking for test storage... 00:06:23.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:23.889 16:57:16 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:23.889 16:57:16 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:23.889 16:57:16 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:23.889 16:57:16 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:24.825 16:57:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
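[get_zoned_devs above probes nvme0n1, nvme0n2, nvme0n3 and nvme1n1 in turn; each /sys/block/<dev>/queue/zoned reads "none", so no device is excluded as zoned. The probe reduces to this sketch:]

    # a device is zoned when its queue/zoned attribute is anything but "none"
    zoned=()
    for dev in /sys/block/nvme*; do
        [[ -e $dev/queue/zoned ]] || continue
        [[ $(< "$dev/queue/zoned") != none ]] && zoned+=("${dev##*/}")
    done
    (( ${#zoned[@]} )) && printf 'zoned device: %s\n' "${zoned[@]}"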
00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:24.825 16:57:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:24.825 16:57:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:24.825 No valid GPT data, bailing 00:06:24.825 16:57:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:24.825 16:57:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:24.825 16:57:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:24.825 16:57:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:24.825 16:57:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:24.825 16:57:17 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:06:24.825 16:57:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:06:24.825 16:57:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:06:24.825 No valid GPT data, bailing 00:06:24.825 16:57:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:06:24.825 16:57:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:24.825 16:57:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:06:24.825 16:57:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:06:24.825 16:57:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:06:24.825 16:57:17 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:24.825 16:57:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:24.826 16:57:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
00:06:24.826 16:57:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:06:24.826 16:57:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:24.826 16:57:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:24.826 16:57:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:24.826 16:57:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:06:24.826 16:57:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:06:24.826 16:57:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:06:24.826 No valid GPT data, bailing 00:06:24.826 16:57:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:06:25.085 16:57:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:25.085 16:57:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:06:25.085 16:57:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:06:25.085 16:57:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:06:25.085 16:57:17 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:06:25.085 16:57:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:06:25.085 16:57:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:06:25.085 No valid GPT data, bailing 00:06:25.085 16:57:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:25.085 16:57:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:25.085 16:57:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:06:25.085 16:57:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:06:25.085 16:57:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:06:25.085 16:57:17 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:06:25.085 16:57:17 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:25.085 16:57:17 setup.sh.devices 
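[All four disks pass the filter above: spdk-gpt.py and blkid find no partition table ("No valid GPT data, bailing", empty PTTYPE), and each reported size (4294967296 B for the three nvme0 namespaces, 5368709120 B for nvme1n1) clears min_disk_size. The two checks reduce to this sketch; the repo's glob additionally skips *c* controller entries and it consults scripts/spdk-gpt.py before blkid:]

    min_disk_size=3221225472   # 3 GiB, per devices.sh@198
    usable=()
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # blkid prints a PTTYPE value only when a partition table exists
        pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2> /dev/null || true)
        [[ -z $pt ]] || continue
        size=$(( $(< "$block/size") * 512 ))   # size file counts 512 B sectors
        (( size >= min_disk_size )) && usable+=("$dev")
    done
    printf 'usable: %s\n' "${usable[@]}"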
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:25.085 16:57:17 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.085 16:57:17 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.085 16:57:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:25.085 ************************************ 00:06:25.085 START TEST nvme_mount 00:06:25.085 ************************************ 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:25.085 16:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:26.022 Creating new GPT entries in memory. 00:06:26.022 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:26.022 other utilities. 00:06:26.022 16:57:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:26.022 16:57:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:26.022 16:57:18 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:26.022 16:57:18 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:26.022 16:57:18 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:27.416 Creating new GPT entries in memory. 00:06:27.416 The operation has completed successfully. 
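[partition_drive above zaps the GPT and creates partition 1 spanning sectors 2048-264191, with sync_dev_uevents.sh blocking until the kernel announces the new partition. Roughly, with udevadm settle standing in for the repo's uevent helper:]

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all               # destroy GPT/MBR data structures
    sgdisk "$disk" --new=1:2048:264191     # partition 1, same sectors as above
    udevadm settle                         # wait for /dev/nvme0n1p1 to appear
    [[ -b ${disk}p1 ]] || { echo 'partition node missing' >&2; exit 1; }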
00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57000 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:27.416 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:27.675 16:57:19 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:27.675 16:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:27.675 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:27.675 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:27.934 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:27.934 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:27.934 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:27.934 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:27.934 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:27.934 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:27.934 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:27.934 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:27.934 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:27.934 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:27.934 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:27.934 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:27.934 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:28.194 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:28.194 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:28.195 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:28.195 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:28.195 16:57:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:28.454 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:28.454 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:28.454 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:28.454 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.454 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:28.454 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.713 16:57:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:28.713 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.713 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:28.713 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:28.973 16:57:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:29.233 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:29.233 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:29.233 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:29.233 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.233 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:29.233 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.493 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:29.493 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.493 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:29.493 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.493 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:29.493 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:29.493 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:29.493 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:29.493 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:29.493 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:29.493 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:29.493 16:57:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:29.493 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:29.493 00:06:29.493 real 0m4.547s 00:06:29.493 user 0m0.892s 00:06:29.493 sys 0m1.432s 00:06:29.493 16:57:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.493 16:57:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:29.493 ************************************ 00:06:29.493 END TEST nvme_mount 00:06:29.493 
************************************ 00:06:29.752 16:57:22 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:29.752 16:57:22 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.752 16:57:22 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.752 16:57:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:29.752 ************************************ 00:06:29.752 START TEST dm_mount 00:06:29.752 ************************************ 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:29.752 16:57:22 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:30.688 Creating new GPT entries in memory. 00:06:30.688 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:30.688 other utilities. 00:06:30.688 16:57:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:30.688 16:57:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:30.688 16:57:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:30.688 16:57:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:30.688 16:57:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:32.067 Creating new GPT entries in memory. 00:06:32.067 The operation has completed successfully. 
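The sgdisk call traced above lays down the first GPT partition; the matching call for partition 2 follows just below. A rough standalone sketch of the same flow, assuming a disposable scratch disk and substituting a plain udevadm settle for SPDK's sync_dev_uevents.sh helper:

    disk=/dev/nvme0n1                                    # assumption: a disk whose contents may be destroyed
    sgdisk "$disk" --zap-all                             # wipe GPT and protective MBR, as in common.sh@56
    flock "$disk" sgdisk "$disk" --new=1:2048:264191     # 264191 - 2048 + 1 = 262144 sectors = 128 MiB
    flock "$disk" sgdisk "$disk" --new=2:264192:526335   # second partition starts at part_end + 1
    udevadm settle                                       # stand-in for scripts/sync_dev_uevents.sh block/partition

The sector count comes straight from common.sh@51: the 1073741824-byte size constant divided by 4096 gives 262144 sectors, i.e. 128 MiB per partition at 512-byte sectors.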
00:06:32.067 16:57:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:32.067 16:57:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:32.067 16:57:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:32.067 16:57:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:32.067 16:57:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:33.003 The operation has completed successfully. 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57436 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:33.003 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:33.004 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:33.004 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:33.004 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:33.004 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.004 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:33.004 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:33.004 16:57:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:33.004 16:57:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:33.262 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:33.262 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:33.262 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:33.262 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.262 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:33.262 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.262 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:33.262 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.262 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:33.262 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:33.522 16:57:25 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:33.522 16:57:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:33.781 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:33.781 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:33.781 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:33.781 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.781 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:33.781 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:34.040 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:34.040 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:34.040 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:34.040 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:34.300 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:34.300 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:34.300 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:34.300 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:34.300 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:34.300 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:34.300 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:34.300 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:34.300 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:34.300 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:34.300 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:34.300 16:57:26 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:06:34.300 00:06:34.300 real 0m4.609s 00:06:34.300 user 0m0.565s 00:06:34.300 sys 0m0.994s 00:06:34.300 16:57:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.300 16:57:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:34.300 ************************************ 00:06:34.300 END TEST dm_mount 00:06:34.300 ************************************ 00:06:34.300 16:57:26 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:34.300 16:57:26 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:34.300 16:57:26 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:34.300 16:57:26 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:34.300 16:57:26 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:34.300 16:57:26 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:34.300 16:57:26 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:34.559 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:34.559 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:34.559 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:34.559 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:34.559 16:57:26 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:34.559 16:57:26 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:34.559 16:57:27 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:34.559 16:57:27 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:34.559 16:57:27 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:34.559 16:57:27 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:34.559 16:57:27 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:34.559 00:06:34.559 real 0m11.035s 00:06:34.559 user 0m2.195s 00:06:34.559 sys 0m3.254s 00:06:34.559 16:57:27 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.559 ************************************ 00:06:34.559 END TEST devices 00:06:34.559 ************************************ 00:06:34.559 16:57:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:34.818 00:06:34.818 real 0m26.500s 00:06:34.818 user 0m8.503s 00:06:34.818 sys 0m12.733s 00:06:34.818 16:57:27 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.818 16:57:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:34.818 ************************************ 00:06:34.818 END TEST setup.sh 00:06:34.818 ************************************ 00:06:34.818 16:57:27 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:35.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:35.773 Hugepages 00:06:35.773 node hugesize free / total 00:06:35.773 node0 1048576kB 0 / 0 00:06:35.773 node0 2048kB 2048 / 2048 00:06:35.773 00:06:35.773 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:35.773 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:35.773 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:36.032 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:06:36.032 16:57:28 -- spdk/autotest.sh@130 -- # uname -s 00:06:36.032 16:57:28 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:36.032 16:57:28 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:36.032 16:57:28 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:36.967 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:36.967 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:36.967 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:37.225 16:57:29 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:38.179 16:57:30 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:38.179 16:57:30 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:38.179 16:57:30 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:38.179 16:57:30 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:38.179 16:57:30 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:38.179 16:57:30 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:38.179 16:57:30 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:38.179 16:57:30 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:38.179 16:57:30 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:38.179 16:57:30 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:06:38.179 16:57:30 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:38.179 16:57:30 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:38.745 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:38.745 Waiting for block devices as requested 00:06:38.745 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:39.004 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:39.004 16:57:31 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:39.004 16:57:31 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:39.004 16:57:31 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:39.004 16:57:31 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:06:39.004 16:57:31 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:39.004 16:57:31 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:39.004 16:57:31 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:39.004 16:57:31 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:06:39.004 16:57:31 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:06:39.004 16:57:31 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:06:39.004 16:57:31 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:06:39.004 16:57:31 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:39.004 16:57:31 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:39.004 16:57:31 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:39.004 16:57:31 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:39.004 16:57:31 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:39.004 16:57:31 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 
00:06:39.004 16:57:31 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:39.004 16:57:31 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:39.004 16:57:31 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:39.004 16:57:31 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:39.004 16:57:31 -- common/autotest_common.sh@1557 -- # continue 00:06:39.004 16:57:31 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:39.004 16:57:31 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:39.004 16:57:31 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:39.004 16:57:31 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:06:39.004 16:57:31 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:39.004 16:57:31 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:39.004 16:57:31 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:39.004 16:57:31 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:39.004 16:57:31 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:39.004 16:57:31 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:39.004 16:57:31 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:39.004 16:57:31 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:39.004 16:57:31 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:39.004 16:57:31 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:39.004 16:57:31 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:39.004 16:57:31 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:39.004 16:57:31 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:39.004 16:57:31 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:39.004 16:57:31 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:39.004 16:57:31 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:39.004 16:57:31 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:39.004 16:57:31 -- common/autotest_common.sh@1557 -- # continue 00:06:39.004 16:57:31 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:39.004 16:57:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:39.004 16:57:31 -- common/autotest_common.sh@10 -- # set +x 00:06:39.262 16:57:31 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:39.262 16:57:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.262 16:57:31 -- common/autotest_common.sh@10 -- # set +x 00:06:39.262 16:57:31 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:40.198 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:40.198 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:40.198 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:40.198 16:57:32 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:40.198 16:57:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.198 16:57:32 -- common/autotest_common.sh@10 -- # set +x 00:06:40.198 16:57:32 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:40.198 16:57:32 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:40.198 16:57:32 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:40.198 16:57:32 -- common/autotest_common.sh@1577 -- 
# bdfs=() 00:06:40.198 16:57:32 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:40.198 16:57:32 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:40.198 16:57:32 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:40.198 16:57:32 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:40.198 16:57:32 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:40.198 16:57:32 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:40.198 16:57:32 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:40.457 16:57:32 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:06:40.457 16:57:32 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:40.457 16:57:32 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:40.457 16:57:32 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:40.457 16:57:32 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:40.457 16:57:32 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:40.457 16:57:32 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:40.457 16:57:32 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:40.457 16:57:32 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:40.457 16:57:32 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:40.457 16:57:32 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:40.457 16:57:32 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:40.457 16:57:32 -- common/autotest_common.sh@1593 -- # return 0 00:06:40.457 16:57:32 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:40.457 16:57:32 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:40.457 16:57:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:40.457 16:57:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:40.457 16:57:32 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:40.457 16:57:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:40.457 16:57:32 -- common/autotest_common.sh@10 -- # set +x 00:06:40.457 16:57:32 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:40.457 16:57:32 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:40.457 16:57:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.457 16:57:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.457 16:57:32 -- common/autotest_common.sh@10 -- # set +x 00:06:40.457 ************************************ 00:06:40.457 START TEST env 00:06:40.457 ************************************ 00:06:40.457 16:57:32 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:40.457 * Looking for test storage... 
00:06:40.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:40.457 16:57:32 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:40.457 16:57:32 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.457 16:57:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.457 16:57:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:40.716 ************************************ 00:06:40.716 START TEST env_memory 00:06:40.716 ************************************ 00:06:40.716 16:57:32 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:40.716 00:06:40.716 00:06:40.716 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.716 http://cunit.sourceforge.net/ 00:06:40.716 00:06:40.716 00:06:40.716 Suite: memory 00:06:40.716 Test: alloc and free memory map ...[2024-07-25 16:57:32.970047] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:40.716 passed 00:06:40.717 Test: mem map translation ...[2024-07-25 16:57:32.991303] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:40.717 [2024-07-25 16:57:32.991529] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:40.717 [2024-07-25 16:57:32.991726] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:40.717 [2024-07-25 16:57:32.991811] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:40.717 passed 00:06:40.717 Test: mem map registration ...[2024-07-25 16:57:33.031270] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:40.717 [2024-07-25 16:57:33.031489] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:40.717 passed 00:06:40.717 Test: mem map adjacent registrations ...passed 00:06:40.717 00:06:40.717 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.717 suites 1 1 n/a 0 0 00:06:40.717 tests 4 4 4 0 0 00:06:40.717 asserts 152 152 152 0 n/a 00:06:40.717 00:06:40.717 Elapsed time = 0.140 seconds 00:06:40.717 00:06:40.717 real 0m0.161s 00:06:40.717 user 0m0.140s 00:06:40.717 sys 0m0.016s 00:06:40.717 16:57:33 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.717 16:57:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:40.717 ************************************ 00:06:40.717 END TEST env_memory 00:06:40.717 ************************************ 00:06:40.717 16:57:33 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:40.717 16:57:33 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.717 16:57:33 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.717 16:57:33 env -- common/autotest_common.sh@10 -- # set +x 00:06:40.717 ************************************ 00:06:40.717 START TEST env_vtophys 00:06:40.717 ************************************ 00:06:40.717 16:57:33 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:40.976 EAL: lib.eal log level changed from notice to debug 00:06:40.976 EAL: Detected lcore 0 as core 0 on socket 0 00:06:40.976 EAL: Detected lcore 1 as core 0 on socket 0 00:06:40.976 EAL: Detected lcore 2 as core 0 on socket 0 00:06:40.976 EAL: Detected lcore 3 as core 0 on socket 0 00:06:40.976 EAL: Detected lcore 4 as core 0 on socket 0 00:06:40.976 EAL: Detected lcore 5 as core 0 on socket 0 00:06:40.976 EAL: Detected lcore 6 as core 0 on socket 0 00:06:40.976 EAL: Detected lcore 7 as core 0 on socket 0 00:06:40.976 EAL: Detected lcore 8 as core 0 on socket 0 00:06:40.976 EAL: Detected lcore 9 as core 0 on socket 0 00:06:40.976 EAL: Maximum logical cores by configuration: 128 00:06:40.976 EAL: Detected CPU lcores: 10 00:06:40.976 EAL: Detected NUMA nodes: 1 00:06:40.976 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:40.976 EAL: Detected shared linkage of DPDK 00:06:40.976 EAL: No shared files mode enabled, IPC will be disabled 00:06:40.976 EAL: Selected IOVA mode 'PA' 00:06:40.976 EAL: Probing VFIO support... 00:06:40.976 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:40.976 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:40.976 EAL: Ask a virtual area of 0x2e000 bytes 00:06:40.976 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:40.976 EAL: Setting up physically contiguous memory... 00:06:40.976 EAL: Setting maximum number of open files to 524288 00:06:40.976 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:40.976 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:40.976 EAL: Ask a virtual area of 0x61000 bytes 00:06:40.976 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:40.976 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:40.976 EAL: Ask a virtual area of 0x400000000 bytes 00:06:40.976 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:40.976 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:40.976 EAL: Ask a virtual area of 0x61000 bytes 00:06:40.976 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:40.976 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:40.976 EAL: Ask a virtual area of 0x400000000 bytes 00:06:40.976 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:40.976 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:40.976 EAL: Ask a virtual area of 0x61000 bytes 00:06:40.976 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:40.976 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:40.976 EAL: Ask a virtual area of 0x400000000 bytes 00:06:40.976 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:40.976 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:40.976 EAL: Ask a virtual area of 0x61000 bytes 00:06:40.976 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:40.976 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:40.977 EAL: Ask a virtual area of 0x400000000 bytes 00:06:40.977 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:40.977 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:40.977 EAL: Hugepages will be freed exactly as allocated. 
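Each 0x400000000-byte reservation above is the announced memseg-list geometry (n_segs:8192, hugepage_sz:2097152) multiplied out, with the smaller 0x61000 areas presumably holding each list's bookkeeping. A quick sanity check of the arithmetic:

    printf '0x%x per memseg list\n' $(( 8192 * 2097152 ))                    # -> 0x400000000 (16 GiB)
    printf '%d GiB reserved in total\n' $(( 4 * 8192 * 2097152 / 1024**3 ))  # -> 64 GiB across the 4 lists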
00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: TSC frequency is ~2490000 KHz 00:06:40.977 EAL: Main lcore 0 is ready (tid=7f412653ea00;cpuset=[0]) 00:06:40.977 EAL: Trying to obtain current memory policy. 00:06:40.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.977 EAL: Restoring previous memory policy: 0 00:06:40.977 EAL: request: mp_malloc_sync 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: Heap on socket 0 was expanded by 2MB 00:06:40.977 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:40.977 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:40.977 EAL: Mem event callback 'spdk:(nil)' registered 00:06:40.977 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:40.977 00:06:40.977 00:06:40.977 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.977 http://cunit.sourceforge.net/ 00:06:40.977 00:06:40.977 00:06:40.977 Suite: components_suite 00:06:40.977 Test: vtophys_malloc_test ...passed 00:06:40.977 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:40.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.977 EAL: Restoring previous memory policy: 4 00:06:40.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.977 EAL: request: mp_malloc_sync 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: Heap on socket 0 was expanded by 4MB 00:06:40.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.977 EAL: request: mp_malloc_sync 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: Heap on socket 0 was shrunk by 4MB 00:06:40.977 EAL: Trying to obtain current memory policy. 00:06:40.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.977 EAL: Restoring previous memory policy: 4 00:06:40.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.977 EAL: request: mp_malloc_sync 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: Heap on socket 0 was expanded by 6MB 00:06:40.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.977 EAL: request: mp_malloc_sync 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: Heap on socket 0 was shrunk by 6MB 00:06:40.977 EAL: Trying to obtain current memory policy. 00:06:40.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.977 EAL: Restoring previous memory policy: 4 00:06:40.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.977 EAL: request: mp_malloc_sync 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: Heap on socket 0 was expanded by 10MB 00:06:40.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.977 EAL: request: mp_malloc_sync 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: Heap on socket 0 was shrunk by 10MB 00:06:40.977 EAL: Trying to obtain current memory policy. 
00:06:40.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.977 EAL: Restoring previous memory policy: 4 00:06:40.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.977 EAL: request: mp_malloc_sync 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: Heap on socket 0 was expanded by 18MB 00:06:40.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.977 EAL: request: mp_malloc_sync 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: Heap on socket 0 was shrunk by 18MB 00:06:40.977 EAL: Trying to obtain current memory policy. 00:06:40.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.977 EAL: Restoring previous memory policy: 4 00:06:40.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.977 EAL: request: mp_malloc_sync 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: Heap on socket 0 was expanded by 34MB 00:06:40.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.977 EAL: request: mp_malloc_sync 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: Heap on socket 0 was shrunk by 34MB 00:06:40.977 EAL: Trying to obtain current memory policy. 00:06:40.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.977 EAL: Restoring previous memory policy: 4 00:06:40.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.977 EAL: request: mp_malloc_sync 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: Heap on socket 0 was expanded by 66MB 00:06:40.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.977 EAL: request: mp_malloc_sync 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: Heap on socket 0 was shrunk by 66MB 00:06:40.977 EAL: Trying to obtain current memory policy. 00:06:40.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.977 EAL: Restoring previous memory policy: 4 00:06:40.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.977 EAL: request: mp_malloc_sync 00:06:40.977 EAL: No shared files mode enabled, IPC is disabled 00:06:40.977 EAL: Heap on socket 0 was expanded by 130MB 00:06:40.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:41.236 EAL: request: mp_malloc_sync 00:06:41.236 EAL: No shared files mode enabled, IPC is disabled 00:06:41.236 EAL: Heap on socket 0 was shrunk by 130MB 00:06:41.236 EAL: Trying to obtain current memory policy. 00:06:41.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:41.236 EAL: Restoring previous memory policy: 4 00:06:41.236 EAL: Calling mem event callback 'spdk:(nil)' 00:06:41.236 EAL: request: mp_malloc_sync 00:06:41.236 EAL: No shared files mode enabled, IPC is disabled 00:06:41.236 EAL: Heap on socket 0 was expanded by 258MB 00:06:41.236 EAL: Calling mem event callback 'spdk:(nil)' 00:06:41.236 EAL: request: mp_malloc_sync 00:06:41.236 EAL: No shared files mode enabled, IPC is disabled 00:06:41.236 EAL: Heap on socket 0 was shrunk by 258MB 00:06:41.236 EAL: Trying to obtain current memory policy. 
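The heap growth sizes in this suite (4, 6, 10, 18, 34, 66, 130 and 258 MB so far, with 514 and 1026 MB to come just below) all fit one pattern: a power-of-two test buffer plus, apparently, one extra 2 MB hugepage of allocator overhead. The series can be reproduced directly:

    for buf in 2 4 8 16 32 64 128 256 512 1024; do
        printf '%4dMB buffer -> heap grows by %dMB\n' "$buf" $(( buf + 2 ))
    done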
00:06:41.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:41.236 EAL: Restoring previous memory policy: 4 00:06:41.236 EAL: Calling mem event callback 'spdk:(nil)' 00:06:41.236 EAL: request: mp_malloc_sync 00:06:41.236 EAL: No shared files mode enabled, IPC is disabled 00:06:41.236 EAL: Heap on socket 0 was expanded by 514MB 00:06:41.495 EAL: Calling mem event callback 'spdk:(nil)' 00:06:41.495 EAL: request: mp_malloc_sync 00:06:41.495 EAL: No shared files mode enabled, IPC is disabled 00:06:41.495 EAL: Heap on socket 0 was shrunk by 514MB 00:06:41.495 EAL: Trying to obtain current memory policy. 00:06:41.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:41.765 EAL: Restoring previous memory policy: 4 00:06:41.765 EAL: Calling mem event callback 'spdk:(nil)' 00:06:41.765 EAL: request: mp_malloc_sync 00:06:41.765 EAL: No shared files mode enabled, IPC is disabled 00:06:41.765 EAL: Heap on socket 0 was expanded by 1026MB 00:06:41.765 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.034 passed 00:06:42.034 00:06:42.034 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.034 suites 1 1 n/a 0 0 00:06:42.034 tests 2 2 2 0 0 00:06:42.034 asserts 5323 5323 5323 0 n/a 00:06:42.034 00:06:42.034 Elapsed time = 1.022 seconds 00:06:42.034 EAL: request: mp_malloc_sync 00:06:42.034 EAL: No shared files mode enabled, IPC is disabled 00:06:42.034 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:42.034 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.034 EAL: request: mp_malloc_sync 00:06:42.034 EAL: No shared files mode enabled, IPC is disabled 00:06:42.034 EAL: Heap on socket 0 was shrunk by 2MB 00:06:42.034 EAL: No shared files mode enabled, IPC is disabled 00:06:42.034 EAL: No shared files mode enabled, IPC is disabled 00:06:42.034 EAL: No shared files mode enabled, IPC is disabled 00:06:42.034 00:06:42.034 real 0m1.229s 00:06:42.034 user 0m0.650s 00:06:42.034 sys 0m0.445s 00:06:42.034 16:57:34 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.034 ************************************ 00:06:42.034 END TEST env_vtophys 00:06:42.034 ************************************ 00:06:42.034 16:57:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:42.034 16:57:34 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:42.034 16:57:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.034 16:57:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.034 16:57:34 env -- common/autotest_common.sh@10 -- # set +x 00:06:42.034 ************************************ 00:06:42.034 START TEST env_pci 00:06:42.034 ************************************ 00:06:42.034 16:57:34 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:42.034 00:06:42.034 00:06:42.034 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.034 http://cunit.sourceforge.net/ 00:06:42.034 00:06:42.034 00:06:42.034 Suite: pci 00:06:42.034 Test: pci_hook ...[2024-07-25 16:57:34.472904] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58646 has claimed it 00:06:42.034 passed 00:06:42.034 00:06:42.034 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.034 suites 1 1 n/a 0 0 00:06:42.034 tests 1 1 1 0 0 00:06:42.034 asserts 25 25 25 0 n/a 00:06:42.034 00:06:42.034 Elapsed time = 0.003 seconds 00:06:42.034 EAL: Cannot find 
device (10000:00:01.0) 00:06:42.034 EAL: Failed to attach device on primary process 00:06:42.034 00:06:42.034 real 0m0.021s 00:06:42.034 user 0m0.011s 00:06:42.034 sys 0m0.010s 00:06:42.034 16:57:34 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.034 ************************************ 00:06:42.034 END TEST env_pci 00:06:42.034 ************************************ 00:06:42.034 16:57:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:42.294 16:57:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:42.294 16:57:34 env -- env/env.sh@15 -- # uname 00:06:42.294 16:57:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:42.294 16:57:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:42.294 16:57:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:42.294 16:57:34 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:42.294 16:57:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.294 16:57:34 env -- common/autotest_common.sh@10 -- # set +x 00:06:42.294 ************************************ 00:06:42.294 START TEST env_dpdk_post_init 00:06:42.294 ************************************ 00:06:42.294 16:57:34 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:42.294 EAL: Detected CPU lcores: 10 00:06:42.294 EAL: Detected NUMA nodes: 1 00:06:42.294 EAL: Detected shared linkage of DPDK 00:06:42.294 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:42.294 EAL: Selected IOVA mode 'PA' 00:06:42.294 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:42.294 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:42.294 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:42.294 Starting DPDK initialization... 00:06:42.294 Starting SPDK post initialization... 00:06:42.294 SPDK NVMe probe 00:06:42.294 Attaching to 0000:00:10.0 00:06:42.294 Attaching to 0000:00:11.0 00:06:42.294 Attached to 0000:00:10.0 00:06:42.294 Attached to 0000:00:11.0 00:06:42.294 Cleaning up... 
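The '-c 0x1 --base-virtaddr=0x200000000000' arguments assembled by env.sh@14 and env.sh@22 above are, respectively, a hexadecimal lcore mask and a fixed base for EAL's virtual-area reservations, kept stable so secondary processes can map the same layout. Decoding the mask for the 10 detected lcores:

    mask=0x1
    for i in {0..9}; do
        (( (mask >> i) & 1 )) && echo "lcore $i enabled"   # -> only lcore 0
    done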
00:06:42.294 00:06:42.294 real 0m0.187s 00:06:42.294 user 0m0.049s 00:06:42.294 sys 0m0.036s 00:06:42.294 16:57:34 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.294 16:57:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:42.294 ************************************ 00:06:42.294 END TEST env_dpdk_post_init 00:06:42.294 ************************************ 00:06:42.553 16:57:34 env -- env/env.sh@26 -- # uname 00:06:42.553 16:57:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:42.553 16:57:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:42.553 16:57:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.553 16:57:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.553 16:57:34 env -- common/autotest_common.sh@10 -- # set +x 00:06:42.553 ************************************ 00:06:42.553 START TEST env_mem_callbacks 00:06:42.553 ************************************ 00:06:42.553 16:57:34 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:42.553 EAL: Detected CPU lcores: 10 00:06:42.553 EAL: Detected NUMA nodes: 1 00:06:42.553 EAL: Detected shared linkage of DPDK 00:06:42.553 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:42.553 EAL: Selected IOVA mode 'PA' 00:06:42.553 00:06:42.553 00:06:42.553 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.553 http://cunit.sourceforge.net/ 00:06:42.553 00:06:42.553 00:06:42.553 Suite: memory 00:06:42.553 Test: test ... 00:06:42.553 register 0x200000200000 2097152 00:06:42.553 malloc 3145728 00:06:42.553 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:42.553 register 0x200000400000 4194304 00:06:42.553 buf 0x200000500000 len 3145728 PASSED 00:06:42.553 malloc 64 00:06:42.553 buf 0x2000004fff40 len 64 PASSED 00:06:42.553 malloc 4194304 00:06:42.553 register 0x200000800000 6291456 00:06:42.553 buf 0x200000a00000 len 4194304 PASSED 00:06:42.554 free 0x200000500000 3145728 00:06:42.554 free 0x2000004fff40 64 00:06:42.554 unregister 0x200000400000 4194304 PASSED 00:06:42.554 free 0x200000a00000 4194304 00:06:42.554 unregister 0x200000800000 6291456 PASSED 00:06:42.554 malloc 8388608 00:06:42.554 register 0x200000400000 10485760 00:06:42.554 buf 0x200000600000 len 8388608 PASSED 00:06:42.554 free 0x200000600000 8388608 00:06:42.554 unregister 0x200000400000 10485760 PASSED 00:06:42.554 passed 00:06:42.554 00:06:42.554 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.554 suites 1 1 n/a 0 0 00:06:42.554 tests 1 1 1 0 0 00:06:42.554 asserts 15 15 15 0 n/a 00:06:42.554 00:06:42.554 Elapsed time = 0.011 seconds 00:06:42.554 00:06:42.554 real 0m0.152s 00:06:42.554 user 0m0.020s 00:06:42.554 sys 0m0.030s 00:06:42.554 16:57:34 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.554 16:57:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:42.554 ************************************ 00:06:42.554 END TEST env_mem_callbacks 00:06:42.554 ************************************ 00:06:42.554 00:06:42.554 real 0m2.237s 00:06:42.554 user 0m1.028s 00:06:42.554 sys 0m0.863s 00:06:42.554 16:57:35 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.554 16:57:35 env -- common/autotest_common.sh@10 -- # set +x 00:06:42.554 ************************************ 00:06:42.554 END TEST env 00:06:42.554 
************************************ 00:06:42.812 16:57:35 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:42.812 16:57:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.812 16:57:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.812 16:57:35 -- common/autotest_common.sh@10 -- # set +x 00:06:42.812 ************************************ 00:06:42.812 START TEST rpc 00:06:42.812 ************************************ 00:06:42.812 16:57:35 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:42.812 * Looking for test storage... 00:06:42.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:42.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.813 16:57:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58761 00:06:42.813 16:57:35 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:42.813 16:57:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.813 16:57:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58761 00:06:42.813 16:57:35 rpc -- common/autotest_common.sh@831 -- # '[' -z 58761 ']' 00:06:42.813 16:57:35 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.813 16:57:35 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.813 16:57:35 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.813 16:57:35 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.813 16:57:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.072 [2024-07-25 16:57:35.326754] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:43.072 [2024-07-25 16:57:35.326844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58761 ] 00:06:43.072 [2024-07-25 16:57:35.457004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.331 [2024-07-25 16:57:35.593518] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:43.331 [2024-07-25 16:57:35.593580] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58761' to capture a snapshot of events at runtime. 00:06:43.331 [2024-07-25 16:57:35.593591] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.331 [2024-07-25 16:57:35.593600] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.331 [2024-07-25 16:57:35.593607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58761 for offline analysis/debug. 
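The prologue above starts the target with the bdev tracepoint group enabled (-e bdev) and waits on its RPC socket before running any suite; a hedged sketch of that launch-and-wait pattern, using the default socket path:

    ./build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    # poll the default UNIX-domain socket until the target answers RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done
    # per the notices above, a live snapshot of the enabled tracepoints can
    # then be taken with: ./build/bin/spdk_trace -s spdk_tgt -p "$spdk_pid"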
00:06:43.331 [2024-07-25 16:57:35.593638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.899 16:57:36 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.899 16:57:36 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:43.899 16:57:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:43.899 16:57:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:43.899 16:57:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:43.899 16:57:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:43.899 16:57:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.899 16:57:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.899 16:57:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.899 ************************************ 00:06:43.899 START TEST rpc_integrity 00:06:43.899 ************************************ 00:06:43.899 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:43.899 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:43.899 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.899 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:43.899 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.899 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:43.899 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:43.899 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:43.899 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:43.899 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.899 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:43.899 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.899 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:43.899 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:43.899 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.899 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:43.899 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.899 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:43.899 { 00:06:43.899 "name": "Malloc0", 00:06:43.899 "aliases": [ 00:06:43.899 "f2e572c3-b348-4b90-a5c3-95f49770d2a3" 00:06:43.899 ], 00:06:43.899 "product_name": "Malloc disk", 00:06:43.899 "block_size": 512, 00:06:43.899 "num_blocks": 16384, 00:06:43.899 "uuid": "f2e572c3-b348-4b90-a5c3-95f49770d2a3", 00:06:43.900 "assigned_rate_limits": { 00:06:43.900 "rw_ios_per_sec": 0, 00:06:43.900 "rw_mbytes_per_sec": 0, 00:06:43.900 "r_mbytes_per_sec": 0, 00:06:43.900 "w_mbytes_per_sec": 0 00:06:43.900 }, 00:06:43.900 "claimed": false, 00:06:43.900 "zoned": false, 00:06:43.900 "supported_io_types": { 00:06:43.900 "read": true, 00:06:43.900 "write": true, 00:06:43.900 "unmap": true, 00:06:43.900 "flush": true, 
00:06:43.900 "reset": true, 00:06:43.900 "nvme_admin": false, 00:06:43.900 "nvme_io": false, 00:06:43.900 "nvme_io_md": false, 00:06:43.900 "write_zeroes": true, 00:06:43.900 "zcopy": true, 00:06:43.900 "get_zone_info": false, 00:06:43.900 "zone_management": false, 00:06:43.900 "zone_append": false, 00:06:43.900 "compare": false, 00:06:43.900 "compare_and_write": false, 00:06:43.900 "abort": true, 00:06:43.900 "seek_hole": false, 00:06:43.900 "seek_data": false, 00:06:43.900 "copy": true, 00:06:43.900 "nvme_iov_md": false 00:06:43.900 }, 00:06:43.900 "memory_domains": [ 00:06:43.900 { 00:06:43.900 "dma_device_id": "system", 00:06:43.900 "dma_device_type": 1 00:06:43.900 }, 00:06:43.900 { 00:06:43.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.900 "dma_device_type": 2 00:06:43.900 } 00:06:43.900 ], 00:06:43.900 "driver_specific": {} 00:06:43.900 } 00:06:43.900 ]' 00:06:43.900 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:43.900 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:43.900 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:43.900 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.900 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:43.900 [2024-07-25 16:57:36.317069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:43.900 [2024-07-25 16:57:36.317133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.900 [2024-07-25 16:57:36.317174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1764430 00:06:43.900 [2024-07-25 16:57:36.317189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.900 [2024-07-25 16:57:36.318702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.900 [2024-07-25 16:57:36.318742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:43.900 Passthru0 00:06:43.900 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.900 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:43.900 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.900 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:43.900 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.900 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:43.900 { 00:06:43.900 "name": "Malloc0", 00:06:43.900 "aliases": [ 00:06:43.900 "f2e572c3-b348-4b90-a5c3-95f49770d2a3" 00:06:43.900 ], 00:06:43.900 "product_name": "Malloc disk", 00:06:43.900 "block_size": 512, 00:06:43.900 "num_blocks": 16384, 00:06:43.900 "uuid": "f2e572c3-b348-4b90-a5c3-95f49770d2a3", 00:06:43.900 "assigned_rate_limits": { 00:06:43.900 "rw_ios_per_sec": 0, 00:06:43.900 "rw_mbytes_per_sec": 0, 00:06:43.900 "r_mbytes_per_sec": 0, 00:06:43.900 "w_mbytes_per_sec": 0 00:06:43.900 }, 00:06:43.900 "claimed": true, 00:06:43.900 "claim_type": "exclusive_write", 00:06:43.900 "zoned": false, 00:06:43.900 "supported_io_types": { 00:06:43.900 "read": true, 00:06:43.900 "write": true, 00:06:43.900 "unmap": true, 00:06:43.900 "flush": true, 00:06:43.900 "reset": true, 00:06:43.900 "nvme_admin": false, 00:06:43.900 "nvme_io": false, 00:06:43.900 "nvme_io_md": false, 00:06:43.900 "write_zeroes": true, 00:06:43.900 "zcopy": true, 00:06:43.900 
"get_zone_info": false, 00:06:43.900 "zone_management": false, 00:06:43.900 "zone_append": false, 00:06:43.900 "compare": false, 00:06:43.900 "compare_and_write": false, 00:06:43.900 "abort": true, 00:06:43.900 "seek_hole": false, 00:06:43.900 "seek_data": false, 00:06:43.900 "copy": true, 00:06:43.900 "nvme_iov_md": false 00:06:43.900 }, 00:06:43.900 "memory_domains": [ 00:06:43.900 { 00:06:43.900 "dma_device_id": "system", 00:06:43.900 "dma_device_type": 1 00:06:43.900 }, 00:06:43.900 { 00:06:43.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.900 "dma_device_type": 2 00:06:43.900 } 00:06:43.900 ], 00:06:43.900 "driver_specific": {} 00:06:43.900 }, 00:06:43.900 { 00:06:43.900 "name": "Passthru0", 00:06:43.900 "aliases": [ 00:06:43.900 "cea4cfad-725f-54b9-ab77-ec0475fc936f" 00:06:43.900 ], 00:06:43.900 "product_name": "passthru", 00:06:43.900 "block_size": 512, 00:06:43.900 "num_blocks": 16384, 00:06:43.900 "uuid": "cea4cfad-725f-54b9-ab77-ec0475fc936f", 00:06:43.900 "assigned_rate_limits": { 00:06:43.900 "rw_ios_per_sec": 0, 00:06:43.900 "rw_mbytes_per_sec": 0, 00:06:43.900 "r_mbytes_per_sec": 0, 00:06:43.900 "w_mbytes_per_sec": 0 00:06:43.900 }, 00:06:43.900 "claimed": false, 00:06:43.900 "zoned": false, 00:06:43.900 "supported_io_types": { 00:06:43.900 "read": true, 00:06:43.900 "write": true, 00:06:43.900 "unmap": true, 00:06:43.900 "flush": true, 00:06:43.900 "reset": true, 00:06:43.900 "nvme_admin": false, 00:06:43.900 "nvme_io": false, 00:06:43.900 "nvme_io_md": false, 00:06:43.900 "write_zeroes": true, 00:06:43.900 "zcopy": true, 00:06:43.900 "get_zone_info": false, 00:06:43.900 "zone_management": false, 00:06:43.900 "zone_append": false, 00:06:43.900 "compare": false, 00:06:43.900 "compare_and_write": false, 00:06:43.900 "abort": true, 00:06:43.900 "seek_hole": false, 00:06:43.900 "seek_data": false, 00:06:43.900 "copy": true, 00:06:43.900 "nvme_iov_md": false 00:06:43.900 }, 00:06:43.900 "memory_domains": [ 00:06:43.900 { 00:06:43.900 "dma_device_id": "system", 00:06:43.900 "dma_device_type": 1 00:06:43.900 }, 00:06:43.900 { 00:06:43.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.900 "dma_device_type": 2 00:06:43.900 } 00:06:43.900 ], 00:06:43.900 "driver_specific": { 00:06:43.900 "passthru": { 00:06:43.900 "name": "Passthru0", 00:06:43.900 "base_bdev_name": "Malloc0" 00:06:43.900 } 00:06:43.900 } 00:06:43.900 } 00:06:43.900 ]' 00:06:43.900 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:44.159 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:44.159 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:44.159 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.159 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.159 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.159 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:44.159 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.159 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.159 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.159 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:44.159 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.159 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.159 
16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.159 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:44.159 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:44.159 ************************************ 00:06:44.159 END TEST rpc_integrity 00:06:44.159 ************************************ 00:06:44.159 16:57:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:44.159 00:06:44.159 real 0m0.299s 00:06:44.159 user 0m0.180s 00:06:44.159 sys 0m0.042s 00:06:44.160 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.160 16:57:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.160 16:57:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:44.160 16:57:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.160 16:57:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.160 16:57:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.160 ************************************ 00:06:44.160 START TEST rpc_plugins 00:06:44.160 ************************************ 00:06:44.160 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:44.160 16:57:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:44.160 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.160 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:44.160 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.160 16:57:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:44.160 16:57:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:44.160 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.160 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:44.160 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.160 16:57:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:44.160 { 00:06:44.160 "name": "Malloc1", 00:06:44.160 "aliases": [ 00:06:44.160 "18eb5136-b662-4091-8495-948e94426ef5" 00:06:44.160 ], 00:06:44.160 "product_name": "Malloc disk", 00:06:44.160 "block_size": 4096, 00:06:44.160 "num_blocks": 256, 00:06:44.160 "uuid": "18eb5136-b662-4091-8495-948e94426ef5", 00:06:44.160 "assigned_rate_limits": { 00:06:44.160 "rw_ios_per_sec": 0, 00:06:44.160 "rw_mbytes_per_sec": 0, 00:06:44.160 "r_mbytes_per_sec": 0, 00:06:44.160 "w_mbytes_per_sec": 0 00:06:44.160 }, 00:06:44.160 "claimed": false, 00:06:44.160 "zoned": false, 00:06:44.160 "supported_io_types": { 00:06:44.160 "read": true, 00:06:44.160 "write": true, 00:06:44.160 "unmap": true, 00:06:44.160 "flush": true, 00:06:44.160 "reset": true, 00:06:44.160 "nvme_admin": false, 00:06:44.160 "nvme_io": false, 00:06:44.160 "nvme_io_md": false, 00:06:44.160 "write_zeroes": true, 00:06:44.160 "zcopy": true, 00:06:44.160 "get_zone_info": false, 00:06:44.160 "zone_management": false, 00:06:44.160 "zone_append": false, 00:06:44.160 "compare": false, 00:06:44.160 "compare_and_write": false, 00:06:44.160 "abort": true, 00:06:44.160 "seek_hole": false, 00:06:44.160 "seek_data": false, 00:06:44.160 "copy": true, 00:06:44.160 "nvme_iov_md": false 00:06:44.160 }, 00:06:44.160 "memory_domains": [ 00:06:44.160 { 00:06:44.160 "dma_device_id": "system", 00:06:44.160 "dma_device_type": 1 00:06:44.160 }, 00:06:44.160 { 00:06:44.160 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:06:44.160 "dma_device_type": 2 00:06:44.160 } 00:06:44.160 ], 00:06:44.160 "driver_specific": {} 00:06:44.160 } 00:06:44.160 ]' 00:06:44.160 16:57:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:44.160 16:57:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:44.160 16:57:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:44.160 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.160 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:44.418 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.418 16:57:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:44.418 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.418 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:44.418 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.418 16:57:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:44.418 16:57:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:44.418 ************************************ 00:06:44.418 END TEST rpc_plugins 00:06:44.418 ************************************ 00:06:44.418 16:57:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:44.418 00:06:44.418 real 0m0.156s 00:06:44.418 user 0m0.090s 00:06:44.418 sys 0m0.025s 00:06:44.418 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.418 16:57:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:44.418 16:57:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:44.418 16:57:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.418 16:57:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.418 16:57:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.418 ************************************ 00:06:44.418 START TEST rpc_trace_cmd_test 00:06:44.418 ************************************ 00:06:44.418 16:57:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:44.418 16:57:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:44.418 16:57:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:44.418 16:57:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.418 16:57:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.418 16:57:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.418 16:57:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:44.418 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58761", 00:06:44.418 "tpoint_group_mask": "0x8", 00:06:44.418 "iscsi_conn": { 00:06:44.418 "mask": "0x2", 00:06:44.418 "tpoint_mask": "0x0" 00:06:44.418 }, 00:06:44.418 "scsi": { 00:06:44.418 "mask": "0x4", 00:06:44.418 "tpoint_mask": "0x0" 00:06:44.418 }, 00:06:44.418 "bdev": { 00:06:44.418 "mask": "0x8", 00:06:44.418 "tpoint_mask": "0xffffffffffffffff" 00:06:44.418 }, 00:06:44.418 "nvmf_rdma": { 00:06:44.418 "mask": "0x10", 00:06:44.418 "tpoint_mask": "0x0" 00:06:44.418 }, 00:06:44.418 "nvmf_tcp": { 00:06:44.418 "mask": "0x20", 00:06:44.418 "tpoint_mask": "0x0" 00:06:44.418 }, 00:06:44.418 "ftl": { 00:06:44.418 "mask": "0x40", 00:06:44.418 "tpoint_mask": "0x0" 00:06:44.418 }, 00:06:44.418 "blobfs": { 00:06:44.418 "mask": "0x80", 00:06:44.418 
"tpoint_mask": "0x0" 00:06:44.418 }, 00:06:44.418 "dsa": { 00:06:44.418 "mask": "0x200", 00:06:44.418 "tpoint_mask": "0x0" 00:06:44.418 }, 00:06:44.418 "thread": { 00:06:44.418 "mask": "0x400", 00:06:44.418 "tpoint_mask": "0x0" 00:06:44.418 }, 00:06:44.418 "nvme_pcie": { 00:06:44.418 "mask": "0x800", 00:06:44.418 "tpoint_mask": "0x0" 00:06:44.418 }, 00:06:44.418 "iaa": { 00:06:44.418 "mask": "0x1000", 00:06:44.418 "tpoint_mask": "0x0" 00:06:44.418 }, 00:06:44.418 "nvme_tcp": { 00:06:44.418 "mask": "0x2000", 00:06:44.418 "tpoint_mask": "0x0" 00:06:44.418 }, 00:06:44.418 "bdev_nvme": { 00:06:44.418 "mask": "0x4000", 00:06:44.418 "tpoint_mask": "0x0" 00:06:44.418 }, 00:06:44.418 "sock": { 00:06:44.418 "mask": "0x8000", 00:06:44.418 "tpoint_mask": "0x0" 00:06:44.418 } 00:06:44.418 }' 00:06:44.418 16:57:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:44.418 16:57:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:44.418 16:57:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:44.418 16:57:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:44.677 16:57:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:44.677 16:57:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:44.677 16:57:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:44.677 16:57:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:44.677 16:57:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:44.677 ************************************ 00:06:44.677 END TEST rpc_trace_cmd_test 00:06:44.677 ************************************ 00:06:44.677 16:57:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:44.677 00:06:44.677 real 0m0.266s 00:06:44.677 user 0m0.205s 00:06:44.677 sys 0m0.044s 00:06:44.677 16:57:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.677 16:57:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.677 16:57:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:44.677 16:57:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:44.677 16:57:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:44.677 16:57:37 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.677 16:57:37 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.677 16:57:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.677 ************************************ 00:06:44.677 START TEST rpc_daemon_integrity 00:06:44.677 ************************************ 00:06:44.677 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:44.677 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:44.677 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.677 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.677 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.677 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:44.677 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:44.952 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:44.952 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:44.952 16:57:37 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.952 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:44.953 { 00:06:44.953 "name": "Malloc2", 00:06:44.953 "aliases": [ 00:06:44.953 "78a51dd8-23db-453b-a6ad-c747efd89fa4" 00:06:44.953 ], 00:06:44.953 "product_name": "Malloc disk", 00:06:44.953 "block_size": 512, 00:06:44.953 "num_blocks": 16384, 00:06:44.953 "uuid": "78a51dd8-23db-453b-a6ad-c747efd89fa4", 00:06:44.953 "assigned_rate_limits": { 00:06:44.953 "rw_ios_per_sec": 0, 00:06:44.953 "rw_mbytes_per_sec": 0, 00:06:44.953 "r_mbytes_per_sec": 0, 00:06:44.953 "w_mbytes_per_sec": 0 00:06:44.953 }, 00:06:44.953 "claimed": false, 00:06:44.953 "zoned": false, 00:06:44.953 "supported_io_types": { 00:06:44.953 "read": true, 00:06:44.953 "write": true, 00:06:44.953 "unmap": true, 00:06:44.953 "flush": true, 00:06:44.953 "reset": true, 00:06:44.953 "nvme_admin": false, 00:06:44.953 "nvme_io": false, 00:06:44.953 "nvme_io_md": false, 00:06:44.953 "write_zeroes": true, 00:06:44.953 "zcopy": true, 00:06:44.953 "get_zone_info": false, 00:06:44.953 "zone_management": false, 00:06:44.953 "zone_append": false, 00:06:44.953 "compare": false, 00:06:44.953 "compare_and_write": false, 00:06:44.953 "abort": true, 00:06:44.953 "seek_hole": false, 00:06:44.953 "seek_data": false, 00:06:44.953 "copy": true, 00:06:44.953 "nvme_iov_md": false 00:06:44.953 }, 00:06:44.953 "memory_domains": [ 00:06:44.953 { 00:06:44.953 "dma_device_id": "system", 00:06:44.953 "dma_device_type": 1 00:06:44.953 }, 00:06:44.953 { 00:06:44.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.953 "dma_device_type": 2 00:06:44.953 } 00:06:44.953 ], 00:06:44.953 "driver_specific": {} 00:06:44.953 } 00:06:44.953 ]' 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.953 [2024-07-25 16:57:37.240135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:44.953 [2024-07-25 16:57:37.240310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.953 [2024-07-25 16:57:37.240376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1765370 00:06:44.953 [2024-07-25 16:57:37.240409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.953 [2024-07-25 16:57:37.243605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.953 [2024-07-25 16:57:37.243669] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:44.953 Passthru0 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:44.953 { 00:06:44.953 "name": "Malloc2", 00:06:44.953 "aliases": [ 00:06:44.953 "78a51dd8-23db-453b-a6ad-c747efd89fa4" 00:06:44.953 ], 00:06:44.953 "product_name": "Malloc disk", 00:06:44.953 "block_size": 512, 00:06:44.953 "num_blocks": 16384, 00:06:44.953 "uuid": "78a51dd8-23db-453b-a6ad-c747efd89fa4", 00:06:44.953 "assigned_rate_limits": { 00:06:44.953 "rw_ios_per_sec": 0, 00:06:44.953 "rw_mbytes_per_sec": 0, 00:06:44.953 "r_mbytes_per_sec": 0, 00:06:44.953 "w_mbytes_per_sec": 0 00:06:44.953 }, 00:06:44.953 "claimed": true, 00:06:44.953 "claim_type": "exclusive_write", 00:06:44.953 "zoned": false, 00:06:44.953 "supported_io_types": { 00:06:44.953 "read": true, 00:06:44.953 "write": true, 00:06:44.953 "unmap": true, 00:06:44.953 "flush": true, 00:06:44.953 "reset": true, 00:06:44.953 "nvme_admin": false, 00:06:44.953 "nvme_io": false, 00:06:44.953 "nvme_io_md": false, 00:06:44.953 "write_zeroes": true, 00:06:44.953 "zcopy": true, 00:06:44.953 "get_zone_info": false, 00:06:44.953 "zone_management": false, 00:06:44.953 "zone_append": false, 00:06:44.953 "compare": false, 00:06:44.953 "compare_and_write": false, 00:06:44.953 "abort": true, 00:06:44.953 "seek_hole": false, 00:06:44.953 "seek_data": false, 00:06:44.953 "copy": true, 00:06:44.953 "nvme_iov_md": false 00:06:44.953 }, 00:06:44.953 "memory_domains": [ 00:06:44.953 { 00:06:44.953 "dma_device_id": "system", 00:06:44.953 "dma_device_type": 1 00:06:44.953 }, 00:06:44.953 { 00:06:44.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.953 "dma_device_type": 2 00:06:44.953 } 00:06:44.953 ], 00:06:44.953 "driver_specific": {} 00:06:44.953 }, 00:06:44.953 { 00:06:44.953 "name": "Passthru0", 00:06:44.953 "aliases": [ 00:06:44.953 "0464c276-1e12-53ab-abf3-e0819b797047" 00:06:44.953 ], 00:06:44.953 "product_name": "passthru", 00:06:44.953 "block_size": 512, 00:06:44.953 "num_blocks": 16384, 00:06:44.953 "uuid": "0464c276-1e12-53ab-abf3-e0819b797047", 00:06:44.953 "assigned_rate_limits": { 00:06:44.953 "rw_ios_per_sec": 0, 00:06:44.953 "rw_mbytes_per_sec": 0, 00:06:44.953 "r_mbytes_per_sec": 0, 00:06:44.953 "w_mbytes_per_sec": 0 00:06:44.953 }, 00:06:44.953 "claimed": false, 00:06:44.953 "zoned": false, 00:06:44.953 "supported_io_types": { 00:06:44.953 "read": true, 00:06:44.953 "write": true, 00:06:44.953 "unmap": true, 00:06:44.953 "flush": true, 00:06:44.953 "reset": true, 00:06:44.953 "nvme_admin": false, 00:06:44.953 "nvme_io": false, 00:06:44.953 "nvme_io_md": false, 00:06:44.953 "write_zeroes": true, 00:06:44.953 "zcopy": true, 00:06:44.953 "get_zone_info": false, 00:06:44.953 "zone_management": false, 00:06:44.953 "zone_append": false, 00:06:44.953 "compare": false, 00:06:44.953 "compare_and_write": false, 00:06:44.953 "abort": true, 00:06:44.953 "seek_hole": false, 00:06:44.953 "seek_data": false, 00:06:44.953 "copy": true, 00:06:44.953 "nvme_iov_md": false 00:06:44.953 }, 00:06:44.953 
"memory_domains": [ 00:06:44.953 { 00:06:44.953 "dma_device_id": "system", 00:06:44.953 "dma_device_type": 1 00:06:44.953 }, 00:06:44.953 { 00:06:44.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.953 "dma_device_type": 2 00:06:44.953 } 00:06:44.953 ], 00:06:44.953 "driver_specific": { 00:06:44.953 "passthru": { 00:06:44.953 "name": "Passthru0", 00:06:44.953 "base_bdev_name": "Malloc2" 00:06:44.953 } 00:06:44.953 } 00:06:44.953 } 00:06:44.953 ]' 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:44.953 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.954 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.954 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.954 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:44.954 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.954 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:44.954 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.954 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:44.954 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:44.954 ************************************ 00:06:44.954 END TEST rpc_daemon_integrity 00:06:44.954 ************************************ 00:06:44.954 16:57:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:44.954 00:06:44.954 real 0m0.314s 00:06:44.954 user 0m0.182s 00:06:44.954 sys 0m0.065s 00:06:44.954 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.954 16:57:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:45.213 16:57:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:45.213 16:57:37 rpc -- rpc/rpc.sh@84 -- # killprocess 58761 00:06:45.213 16:57:37 rpc -- common/autotest_common.sh@950 -- # '[' -z 58761 ']' 00:06:45.213 16:57:37 rpc -- common/autotest_common.sh@954 -- # kill -0 58761 00:06:45.213 16:57:37 rpc -- common/autotest_common.sh@955 -- # uname 00:06:45.213 16:57:37 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.213 16:57:37 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58761 00:06:45.213 killing process with pid 58761 00:06:45.213 16:57:37 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.213 16:57:37 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.213 16:57:37 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58761' 00:06:45.213 16:57:37 rpc -- common/autotest_common.sh@969 -- # kill 58761 00:06:45.213 16:57:37 rpc -- common/autotest_common.sh@974 -- # wait 58761 00:06:45.779 00:06:45.779 real 0m2.969s 00:06:45.779 user 0m3.587s 
00:06:45.779 sys 0m0.796s 00:06:45.779 16:57:38 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.779 ************************************ 00:06:45.779 END TEST rpc 00:06:45.779 ************************************ 00:06:45.779 16:57:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.779 16:57:38 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:45.779 16:57:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.779 16:57:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.779 16:57:38 -- common/autotest_common.sh@10 -- # set +x 00:06:45.779 ************************************ 00:06:45.779 START TEST skip_rpc 00:06:45.779 ************************************ 00:06:45.779 16:57:38 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:45.779 * Looking for test storage... 00:06:45.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:45.779 16:57:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:45.779 16:57:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:45.779 16:57:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:45.779 16:57:38 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.779 16:57:38 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.779 16:57:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.038 ************************************ 00:06:46.038 START TEST skip_rpc 00:06:46.038 ************************************ 00:06:46.038 16:57:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:46.038 16:57:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:46.038 16:57:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58959 00:06:46.038 16:57:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.038 16:57:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:46.038 [2024-07-25 16:57:38.333634] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
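skip_rpc (starting above) brings the target up with --no-rpc-server and then expects any RPC call to fail; a minimal sketch of the same negative check:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    if ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: RPC server is listening" >&2
        exit 1
    fi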
00:06:46.038 [2024-07-25 16:57:38.334009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58959 ] 00:06:46.038 [2024-07-25 16:57:38.468041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.296 [2024-07-25 16:57:38.615517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58959 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58959 ']' 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58959 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58959 00:06:51.567 killing process with pid 58959 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58959' 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58959 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58959 00:06:51.567 ************************************ 00:06:51.567 END TEST skip_rpc 00:06:51.567 ************************************ 00:06:51.567 00:06:51.567 real 0m5.379s 00:06:51.567 user 0m4.912s 00:06:51.567 sys 0m0.383s 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.567 16:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:06:51.567 16:57:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:51.567 16:57:43 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.567 16:57:43 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.567 16:57:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.567 ************************************ 00:06:51.567 START TEST skip_rpc_with_json 00:06:51.567 ************************************ 00:06:51.567 16:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:51.567 16:57:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:51.567 16:57:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59040 00:06:51.567 16:57:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:51.567 16:57:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.567 16:57:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59040 00:06:51.567 16:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 59040 ']' 00:06:51.567 16:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.567 16:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.567 16:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.568 16:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.568 16:57:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:51.568 [2024-07-25 16:57:43.782033] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
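skip_rpc_with_json (starting above) round-trips the live configuration through JSON: it creates the TCP transport over RPC, saves the whole config, then restarts the target non-interactively from the file. A sketch with paths mirroring the log:

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > test/rpc/config.json
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json
    # the suite later greps the relaunch log for 'TCP Transport Init' to
    # prove the transport was rebuilt from the saved JSON.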
00:06:51.568 [2024-07-25 16:57:43.782130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59040 ] 00:06:51.568 [2024-07-25 16:57:43.921071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.568 [2024-07-25 16:57:44.023979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:52.503 [2024-07-25 16:57:44.636849] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:52.503 request: 00:06:52.503 { 00:06:52.503 "trtype": "tcp", 00:06:52.503 "method": "nvmf_get_transports", 00:06:52.503 "req_id": 1 00:06:52.503 } 00:06:52.503 Got JSON-RPC error response 00:06:52.503 response: 00:06:52.503 { 00:06:52.503 "code": -19, 00:06:52.503 "message": "No such device" 00:06:52.503 } 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:52.503 [2024-07-25 16:57:44.652920] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.503 16:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:52.503 { 00:06:52.503 "subsystems": [ 00:06:52.503 { 00:06:52.503 "subsystem": "keyring", 00:06:52.503 "config": [] 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "subsystem": "iobuf", 00:06:52.503 "config": [ 00:06:52.503 { 00:06:52.503 "method": "iobuf_set_options", 00:06:52.503 "params": { 00:06:52.503 "small_pool_count": 8192, 00:06:52.503 "large_pool_count": 1024, 00:06:52.503 "small_bufsize": 8192, 00:06:52.503 "large_bufsize": 135168 00:06:52.503 } 00:06:52.503 } 00:06:52.503 ] 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "subsystem": "sock", 00:06:52.503 "config": [ 00:06:52.503 { 00:06:52.503 "method": "sock_set_default_impl", 00:06:52.503 "params": { 00:06:52.503 "impl_name": "posix" 00:06:52.503 } 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "method": "sock_impl_set_options", 00:06:52.503 "params": { 00:06:52.503 "impl_name": "ssl", 00:06:52.503 "recv_buf_size": 4096, 00:06:52.503 "send_buf_size": 4096, 
00:06:52.503 "enable_recv_pipe": true, 00:06:52.503 "enable_quickack": false, 00:06:52.503 "enable_placement_id": 0, 00:06:52.503 "enable_zerocopy_send_server": true, 00:06:52.503 "enable_zerocopy_send_client": false, 00:06:52.503 "zerocopy_threshold": 0, 00:06:52.503 "tls_version": 0, 00:06:52.503 "enable_ktls": false 00:06:52.503 } 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "method": "sock_impl_set_options", 00:06:52.503 "params": { 00:06:52.503 "impl_name": "posix", 00:06:52.503 "recv_buf_size": 2097152, 00:06:52.503 "send_buf_size": 2097152, 00:06:52.503 "enable_recv_pipe": true, 00:06:52.503 "enable_quickack": false, 00:06:52.503 "enable_placement_id": 0, 00:06:52.503 "enable_zerocopy_send_server": true, 00:06:52.503 "enable_zerocopy_send_client": false, 00:06:52.503 "zerocopy_threshold": 0, 00:06:52.503 "tls_version": 0, 00:06:52.503 "enable_ktls": false 00:06:52.503 } 00:06:52.503 } 00:06:52.503 ] 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "subsystem": "vmd", 00:06:52.503 "config": [] 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "subsystem": "accel", 00:06:52.503 "config": [ 00:06:52.503 { 00:06:52.503 "method": "accel_set_options", 00:06:52.503 "params": { 00:06:52.503 "small_cache_size": 128, 00:06:52.503 "large_cache_size": 16, 00:06:52.503 "task_count": 2048, 00:06:52.503 "sequence_count": 2048, 00:06:52.503 "buf_count": 2048 00:06:52.503 } 00:06:52.503 } 00:06:52.503 ] 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "subsystem": "bdev", 00:06:52.503 "config": [ 00:06:52.503 { 00:06:52.503 "method": "bdev_set_options", 00:06:52.503 "params": { 00:06:52.503 "bdev_io_pool_size": 65535, 00:06:52.503 "bdev_io_cache_size": 256, 00:06:52.503 "bdev_auto_examine": true, 00:06:52.503 "iobuf_small_cache_size": 128, 00:06:52.503 "iobuf_large_cache_size": 16 00:06:52.503 } 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "method": "bdev_raid_set_options", 00:06:52.503 "params": { 00:06:52.503 "process_window_size_kb": 1024, 00:06:52.503 "process_max_bandwidth_mb_sec": 0 00:06:52.503 } 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "method": "bdev_iscsi_set_options", 00:06:52.503 "params": { 00:06:52.503 "timeout_sec": 30 00:06:52.503 } 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "method": "bdev_nvme_set_options", 00:06:52.503 "params": { 00:06:52.503 "action_on_timeout": "none", 00:06:52.503 "timeout_us": 0, 00:06:52.503 "timeout_admin_us": 0, 00:06:52.503 "keep_alive_timeout_ms": 10000, 00:06:52.503 "arbitration_burst": 0, 00:06:52.503 "low_priority_weight": 0, 00:06:52.503 "medium_priority_weight": 0, 00:06:52.503 "high_priority_weight": 0, 00:06:52.503 "nvme_adminq_poll_period_us": 10000, 00:06:52.503 "nvme_ioq_poll_period_us": 0, 00:06:52.503 "io_queue_requests": 0, 00:06:52.503 "delay_cmd_submit": true, 00:06:52.503 "transport_retry_count": 4, 00:06:52.503 "bdev_retry_count": 3, 00:06:52.503 "transport_ack_timeout": 0, 00:06:52.503 "ctrlr_loss_timeout_sec": 0, 00:06:52.503 "reconnect_delay_sec": 0, 00:06:52.503 "fast_io_fail_timeout_sec": 0, 00:06:52.503 "disable_auto_failback": false, 00:06:52.503 "generate_uuids": false, 00:06:52.503 "transport_tos": 0, 00:06:52.503 "nvme_error_stat": false, 00:06:52.503 "rdma_srq_size": 0, 00:06:52.503 "io_path_stat": false, 00:06:52.503 "allow_accel_sequence": false, 00:06:52.503 "rdma_max_cq_size": 0, 00:06:52.503 "rdma_cm_event_timeout_ms": 0, 00:06:52.503 "dhchap_digests": [ 00:06:52.503 "sha256", 00:06:52.503 "sha384", 00:06:52.503 "sha512" 00:06:52.503 ], 00:06:52.503 "dhchap_dhgroups": [ 00:06:52.503 "null", 00:06:52.503 "ffdhe2048", 00:06:52.503 
"ffdhe3072", 00:06:52.503 "ffdhe4096", 00:06:52.503 "ffdhe6144", 00:06:52.503 "ffdhe8192" 00:06:52.503 ] 00:06:52.503 } 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "method": "bdev_nvme_set_hotplug", 00:06:52.503 "params": { 00:06:52.503 "period_us": 100000, 00:06:52.503 "enable": false 00:06:52.503 } 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "method": "bdev_wait_for_examine" 00:06:52.503 } 00:06:52.503 ] 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "subsystem": "scsi", 00:06:52.503 "config": null 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "subsystem": "scheduler", 00:06:52.503 "config": [ 00:06:52.503 { 00:06:52.503 "method": "framework_set_scheduler", 00:06:52.503 "params": { 00:06:52.503 "name": "static" 00:06:52.503 } 00:06:52.503 } 00:06:52.503 ] 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "subsystem": "vhost_scsi", 00:06:52.503 "config": [] 00:06:52.503 }, 00:06:52.503 { 00:06:52.503 "subsystem": "vhost_blk", 00:06:52.504 "config": [] 00:06:52.504 }, 00:06:52.504 { 00:06:52.504 "subsystem": "ublk", 00:06:52.504 "config": [] 00:06:52.504 }, 00:06:52.504 { 00:06:52.504 "subsystem": "nbd", 00:06:52.504 "config": [] 00:06:52.504 }, 00:06:52.504 { 00:06:52.504 "subsystem": "nvmf", 00:06:52.504 "config": [ 00:06:52.504 { 00:06:52.504 "method": "nvmf_set_config", 00:06:52.504 "params": { 00:06:52.504 "discovery_filter": "match_any", 00:06:52.504 "admin_cmd_passthru": { 00:06:52.504 "identify_ctrlr": false 00:06:52.504 } 00:06:52.504 } 00:06:52.504 }, 00:06:52.504 { 00:06:52.504 "method": "nvmf_set_max_subsystems", 00:06:52.504 "params": { 00:06:52.504 "max_subsystems": 1024 00:06:52.504 } 00:06:52.504 }, 00:06:52.504 { 00:06:52.504 "method": "nvmf_set_crdt", 00:06:52.504 "params": { 00:06:52.504 "crdt1": 0, 00:06:52.504 "crdt2": 0, 00:06:52.504 "crdt3": 0 00:06:52.504 } 00:06:52.504 }, 00:06:52.504 { 00:06:52.504 "method": "nvmf_create_transport", 00:06:52.504 "params": { 00:06:52.504 "trtype": "TCP", 00:06:52.504 "max_queue_depth": 128, 00:06:52.504 "max_io_qpairs_per_ctrlr": 127, 00:06:52.504 "in_capsule_data_size": 4096, 00:06:52.504 "max_io_size": 131072, 00:06:52.504 "io_unit_size": 131072, 00:06:52.504 "max_aq_depth": 128, 00:06:52.504 "num_shared_buffers": 511, 00:06:52.504 "buf_cache_size": 4294967295, 00:06:52.504 "dif_insert_or_strip": false, 00:06:52.504 "zcopy": false, 00:06:52.504 "c2h_success": true, 00:06:52.504 "sock_priority": 0, 00:06:52.504 "abort_timeout_sec": 1, 00:06:52.504 "ack_timeout": 0, 00:06:52.504 "data_wr_pool_size": 0 00:06:52.504 } 00:06:52.504 } 00:06:52.504 ] 00:06:52.504 }, 00:06:52.504 { 00:06:52.504 "subsystem": "iscsi", 00:06:52.504 "config": [ 00:06:52.504 { 00:06:52.504 "method": "iscsi_set_options", 00:06:52.504 "params": { 00:06:52.504 "node_base": "iqn.2016-06.io.spdk", 00:06:52.504 "max_sessions": 128, 00:06:52.504 "max_connections_per_session": 2, 00:06:52.504 "max_queue_depth": 64, 00:06:52.504 "default_time2wait": 2, 00:06:52.504 "default_time2retain": 20, 00:06:52.504 "first_burst_length": 8192, 00:06:52.504 "immediate_data": true, 00:06:52.504 "allow_duplicated_isid": false, 00:06:52.504 "error_recovery_level": 0, 00:06:52.504 "nop_timeout": 60, 00:06:52.504 "nop_in_interval": 30, 00:06:52.504 "disable_chap": false, 00:06:52.504 "require_chap": false, 00:06:52.504 "mutual_chap": false, 00:06:52.504 "chap_group": 0, 00:06:52.504 "max_large_datain_per_connection": 64, 00:06:52.504 "max_r2t_per_connection": 4, 00:06:52.504 "pdu_pool_size": 36864, 00:06:52.504 "immediate_data_pool_size": 16384, 00:06:52.504 "data_out_pool_size": 2048 
00:06:52.504 } 00:06:52.504 } 00:06:52.504 ] 00:06:52.504 } 00:06:52.504 ] 00:06:52.504 } 00:06:52.504 16:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:52.504 16:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59040 00:06:52.504 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59040 ']' 00:06:52.504 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59040 00:06:52.504 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:52.504 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.504 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59040 00:06:52.504 killing process with pid 59040 00:06:52.504 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.504 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.504 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59040' 00:06:52.504 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59040 00:06:52.504 16:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59040 00:06:52.762 16:57:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59068 00:06:52.762 16:57:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:52.762 16:57:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:58.033 16:57:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59068 00:06:58.033 16:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59068 ']' 00:06:58.033 16:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59068 00:06:58.033 16:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:58.033 16:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.033 16:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59068 00:06:58.033 killing process with pid 59068 00:06:58.033 16:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.033 16:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.033 16:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59068' 00:06:58.033 16:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59068 00:06:58.033 16:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59068 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:58.292 ************************************ 00:06:58.292 END TEST skip_rpc_with_json 00:06:58.292 ************************************ 00:06:58.292 00:06:58.292 real 0m6.843s 00:06:58.292 user 0m6.555s 00:06:58.292 sys 0m0.579s 00:06:58.292 16:57:50 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:58.292 16:57:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:58.292 16:57:50 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.292 16:57:50 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.292 16:57:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.292 ************************************ 00:06:58.292 START TEST skip_rpc_with_delay 00:06:58.292 ************************************ 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:58.292 [2024-07-25 16:57:50.703178] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
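The error just above is the expected outcome of the delay test: spdk_tgt has to reject --wait-for-rpc when --no-rpc-server is set, and the harness asserts that failure through the NOT / valid_exec_arg wrappers traced before it. A minimal sketch of the inversion pattern, using a simplified helper rather than the real autotest_common.sh implementation:

    # Invert a command's exit status: succeed only if the command fails.
    not() {
        if "$@"; then
            return 1    # unexpectedly succeeded, so the test should fail
        fi
        return 0        # failed as expected
    }

    # The combination the test expects spdk_tgt to reject:
    not ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc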
00:06:58.292 [2024-07-25 16:57:50.703286] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.292 ************************************ 00:06:58.292 END TEST skip_rpc_with_delay 00:06:58.292 ************************************ 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.292 00:06:58.292 real 0m0.094s 00:06:58.292 user 0m0.055s 00:06:58.292 sys 0m0.036s 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.292 16:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:58.552 16:57:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:58.552 16:57:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:58.552 16:57:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:58.552 16:57:50 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.552 16:57:50 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.552 16:57:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.552 ************************************ 00:06:58.552 START TEST exit_on_failed_rpc_init 00:06:58.552 ************************************ 00:06:58.552 16:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:58.552 16:57:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59177 00:06:58.552 16:57:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.552 16:57:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59177 00:06:58.552 16:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 59177 ']' 00:06:58.552 16:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.552 16:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.552 16:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.552 16:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.552 16:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:58.552 [2024-07-25 16:57:50.867654] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:58.552 [2024-07-25 16:57:50.867732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59177 ] 00:06:58.552 [2024-07-25 16:57:50.996075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.811 [2024-07-25 16:57:51.087200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:59.380 16:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:59.380 [2024-07-25 16:57:51.773749] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:59.380 [2024-07-25 16:57:51.773834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59195 ] 00:06:59.639 [2024-07-25 16:57:51.913197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.639 [2024-07-25 16:57:52.008774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.639 [2024-07-25 16:57:52.008853] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
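The "socket path ... in use" error above is exactly what exit_on_failed_rpc_init sets out to provoke: both spdk_tgt instances default to the same RPC socket, /var/tmp/spdk.sock, so the second one cannot bind and the app stops with a non-zero status. A rough standalone reproduction, assuming the spdk_tgt binary at the path used throughout this run:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x1 &             # first target claims /var/tmp/spdk.sock
    first=$!
    sleep 2                     # crude wait; the harness polls the socket instead
    "$bin" -m 0x2               # second target: rpc.c reports the path in use
    echo "second target exited with status $?"    # non-zero, as the test asserts
    kill -SIGINT "$first"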
00:06:59.639 [2024-07-25 16:57:52.008864] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:59.639 [2024-07-25 16:57:52.008872] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.639 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:59.639 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.639 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:59.639 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:59.639 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:59.639 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.639 16:57:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:59.639 16:57:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59177 00:06:59.639 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 59177 ']' 00:06:59.639 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 59177 00:06:59.639 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:59.639 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.898 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59177 00:06:59.898 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.898 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.898 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59177' 00:06:59.898 killing process with pid 59177 00:06:59.898 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 59177 00:06:59.898 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 59177 00:07:00.158 00:07:00.158 real 0m1.654s 00:07:00.158 user 0m1.896s 00:07:00.158 sys 0m0.354s 00:07:00.158 ************************************ 00:07:00.158 END TEST exit_on_failed_rpc_init 00:07:00.158 ************************************ 00:07:00.158 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.158 16:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:00.158 16:57:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:00.158 00:07:00.158 real 0m14.394s 00:07:00.158 user 0m13.571s 00:07:00.158 sys 0m1.619s 00:07:00.158 16:57:52 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.158 ************************************ 00:07:00.158 END TEST skip_rpc 00:07:00.158 ************************************ 00:07:00.158 16:57:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.158 16:57:52 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:00.158 16:57:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.158 16:57:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.158 16:57:52 -- common/autotest_common.sh@10 -- # set +x 00:07:00.158 
************************************ 00:07:00.158 START TEST rpc_client 00:07:00.158 ************************************ 00:07:00.158 16:57:52 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:00.417 * Looking for test storage... 00:07:00.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:00.417 16:57:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:00.417 OK 00:07:00.417 16:57:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:00.417 00:07:00.417 real 0m0.154s 00:07:00.417 user 0m0.069s 00:07:00.417 sys 0m0.092s 00:07:00.417 16:57:52 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.417 16:57:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:00.417 ************************************ 00:07:00.417 END TEST rpc_client 00:07:00.417 ************************************ 00:07:00.417 16:57:52 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:00.417 16:57:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.417 16:57:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.417 16:57:52 -- common/autotest_common.sh@10 -- # set +x 00:07:00.417 ************************************ 00:07:00.417 START TEST json_config 00:07:00.417 ************************************ 00:07:00.417 16:57:52 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:00.417 16:57:52 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:00.417 16:57:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:00.417 16:57:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.417 16:57:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.417 16:57:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.417 16:57:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.417 16:57:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4f3ec45a-584a-4a72-a1b0-e42cc578c863 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=4f3ec45a-584a-4a72-a1b0-e42cc578c863 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.677 16:57:52 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.677 16:57:52 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.677 16:57:52 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.677 16:57:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.677 16:57:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.677 16:57:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.677 16:57:52 json_config -- paths/export.sh@5 -- # export PATH 00:07:00.677 16:57:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@47 -- # : 0 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:00.677 16:57:52 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:00.677 16:57:52 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:00.677 16:57:52 json_config -- json_config/json_config.sh@11 -- # [[ 1 -eq 1 ]] 00:07:00.677 16:57:52 json_config -- json_config/json_config.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:07:00.677 16:57:52 json_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:07:00.677 16:57:52 json_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:07:00.677 16:57:52 json_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:07:00.677 16:57:52 json_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:07:00.677 16:57:52 
json_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:07:00.677 16:57:52 json_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:07:00.677 16:57:52 json_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:07:00.677 16:57:52 json_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:07:00.677 16:57:52 json_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:07:00.678 16:57:52 json_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:07:00.678 16:57:52 json_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:07:00.678 16:57:52 json_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:07:00.678 16:57:52 json_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:07:00.678 16:57:52 json_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:07:00.678 16:57:52 json_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:07:00.678 16:57:52 json_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:07:00.678 16:57:52 json_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:07:00.678 16:57:52 json_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:07:00.678 16:57:52 json_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:07:00.678 INFO: JSON configuration test init 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:07:00.678 16:57:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.678 16:57:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@267 -- # timing_enter 
json_config_setup_target 00:07:00.678 16:57:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.678 16:57:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.678 16:57:52 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:07:00.678 16:57:52 json_config -- json_config/common.sh@9 -- # local app=target 00:07:00.678 16:57:52 json_config -- json_config/common.sh@10 -- # shift 00:07:00.678 16:57:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:00.678 16:57:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:00.678 16:57:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:00.678 16:57:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:00.678 16:57:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:00.678 Waiting for target to run... 00:07:00.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:00.678 16:57:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59313 00:07:00.678 16:57:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:00.678 16:57:52 json_config -- json_config/common.sh@25 -- # waitforlisten 59313 /var/tmp/spdk_tgt.sock 00:07:00.678 16:57:52 json_config -- common/autotest_common.sh@831 -- # '[' -z 59313 ']' 00:07:00.678 16:57:52 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:00.678 16:57:52 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.678 16:57:52 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:00.678 16:57:52 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:00.678 16:57:52 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.678 16:57:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.678 [2024-07-25 16:57:53.005300] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
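waitforlisten, traced above with max_retries=100, blocks until the new target answers on its Unix-domain RPC socket. A simplified stand-in for that poll, assuming rpc.py and the standard rpc_get_methods RPC behave as in upstream SPDK:

    sock=/var/tmp/spdk_tgt.sock
    for _ in $(seq 1 100); do
        if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            break               # target is up and answering RPCs
        fi
        sleep 0.1
    done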
00:07:00.678 [2024-07-25 16:57:53.005558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59313 ] 00:07:01.030 [2024-07-25 16:57:53.368273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.289 [2024-07-25 16:57:53.447329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.553 16:57:53 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.553 16:57:53 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:01.553 00:07:01.553 16:57:53 json_config -- json_config/common.sh@26 -- # echo '' 00:07:01.553 16:57:53 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:07:01.553 16:57:53 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:07:01.553 16:57:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:01.553 16:57:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.553 16:57:53 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:07:01.553 16:57:53 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:07:01.553 16:57:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:01.553 16:57:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.553 16:57:53 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:01.553 16:57:53 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:07:01.553 16:57:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:01.812 16:57:54 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:07:01.812 16:57:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:01.812 16:57:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:01.812 16:57:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.070 16:57:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:02.070 16:57:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:02.070 16:57:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:02.071 16:57:54 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:02.071 16:57:54 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:02.071 16:57:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:02.071 16:57:54 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:02.071 16:57:54 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:02.071 16:57:54 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:07:02.071 16:57:54 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:07:02.071 16:57:54 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:07:02.071 16:57:54 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:07:02.071 16:57:54 json_config -- 
json_config/json_config.sh@51 -- # sort 00:07:02.071 16:57:54 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:07:02.071 16:57:54 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:07:02.071 16:57:54 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:07:02.071 16:57:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.071 16:57:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.329 16:57:54 json_config -- json_config/json_config.sh@59 -- # return 0 00:07:02.329 16:57:54 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:02.329 16:57:54 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:02.329 16:57:54 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:07:02.329 16:57:54 json_config -- json_config/json_config.sh@291 -- # create_iscsi_subsystem_config 00:07:02.329 16:57:54 json_config -- json_config/json_config.sh@225 -- # timing_enter create_iscsi_subsystem_config 00:07:02.329 16:57:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:02.329 16:57:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.329 16:57:54 json_config -- json_config/json_config.sh@226 -- # tgt_rpc bdev_malloc_create 64 1024 --name MallocForIscsi0 00:07:02.329 16:57:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 64 1024 --name MallocForIscsi0 00:07:02.329 MallocForIscsi0 00:07:02.329 16:57:54 json_config -- json_config/json_config.sh@227 -- # tgt_rpc iscsi_create_portal_group 1 127.0.0.1:3260 00:07:02.329 16:57:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_portal_group 1 127.0.0.1:3260 00:07:02.587 16:57:54 json_config -- json_config/json_config.sh@228 -- # tgt_rpc iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:07:02.587 16:57:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:07:02.846 16:57:55 json_config -- json_config/json_config.sh@229 -- # tgt_rpc iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:07:02.846 16:57:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:07:02.846 16:57:55 json_config -- json_config/json_config.sh@230 -- # timing_exit create_iscsi_subsystem_config 00:07:02.846 16:57:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.846 16:57:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.104 16:57:55 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:07:03.104 16:57:55 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:07:03.104 16:57:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:03.104 16:57:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.104 16:57:55 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:07:03.104 16:57:55 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:03.104 16:57:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:03.363 MallocBdevForConfigChangeCheck 00:07:03.363 16:57:55 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:07:03.363 16:57:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:03.363 16:57:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.363 16:57:55 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:07:03.363 16:57:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:03.622 INFO: shutting down applications... 00:07:03.622 16:57:55 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:07:03.622 16:57:55 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:07:03.622 16:57:55 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:07:03.622 16:57:55 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:07:03.622 16:57:55 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:03.880 Calling clear_iscsi_subsystem 00:07:03.880 Calling clear_nvmf_subsystem 00:07:03.880 Calling clear_nbd_subsystem 00:07:03.880 Calling clear_ublk_subsystem 00:07:03.880 Calling clear_vhost_blk_subsystem 00:07:03.880 Calling clear_vhost_scsi_subsystem 00:07:03.880 Calling clear_bdev_subsystem 00:07:03.880 16:57:56 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:03.880 16:57:56 json_config -- json_config/json_config.sh@347 -- # count=100 00:07:03.880 16:57:56 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:07:03.880 16:57:56 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:03.881 16:57:56 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:03.881 16:57:56 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:04.449 16:57:56 json_config -- json_config/json_config.sh@349 -- # break 00:07:04.449 16:57:56 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:07:04.449 16:57:56 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:07:04.449 16:57:56 json_config -- json_config/common.sh@31 -- # local app=target 00:07:04.449 16:57:56 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:04.449 16:57:56 json_config -- json_config/common.sh@35 -- # [[ -n 59313 ]] 00:07:04.449 16:57:56 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59313 00:07:04.449 16:57:56 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:04.449 16:57:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:04.449 16:57:56 json_config -- json_config/common.sh@41 -- # kill -0 59313 00:07:04.449 16:57:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:04.720 16:57:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:04.720 16:57:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:04.720 16:57:57 json_config -- json_config/common.sh@41 -- # kill -0 59313 00:07:04.720 SPDK target shutdown done 
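The teardown just traced clears every subsystem, then loops up to 100 times re-saving the config and checking that nothing is left before interrupting the target with SIGINT. A sketch of that verify-then-shutdown flow, assuming the filter scripts take the piped config the way the trace suggests:

    count=100
    while [ "$count" -gt 0 ]; do
        scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
            | test/json_config/config_filter.py -method delete_global_parameters \
            | test/json_config/config_filter.py -method check_empty && break
        count=$((count - 1))
    done
    kill -SIGINT "$app_pid"     # shutdown proceeds once the config is empty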
00:07:04.720 INFO: relaunching applications... 00:07:04.720 16:57:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:04.720 16:57:57 json_config -- json_config/common.sh@43 -- # break 00:07:04.720 16:57:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:04.720 16:57:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:04.720 16:57:57 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:07:04.720 16:57:57 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:04.720 16:57:57 json_config -- json_config/common.sh@9 -- # local app=target 00:07:04.720 16:57:57 json_config -- json_config/common.sh@10 -- # shift 00:07:04.720 16:57:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:04.720 16:57:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:04.720 16:57:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:04.720 16:57:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:04.720 Waiting for target to run... 00:07:04.720 16:57:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:04.720 16:57:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59489 00:07:04.720 16:57:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:04.720 16:57:57 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:04.720 16:57:57 json_config -- json_config/common.sh@25 -- # waitforlisten 59489 /var/tmp/spdk_tgt.sock 00:07:04.720 16:57:57 json_config -- common/autotest_common.sh@831 -- # '[' -z 59489 ']' 00:07:04.720 16:57:57 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:04.720 16:57:57 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.720 16:57:57 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:04.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:04.720 16:57:57 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.720 16:57:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.981 [2024-07-25 16:57:57.241876] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:04.981 [2024-07-25 16:57:57.242144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59489 ] 00:07:05.239 [2024-07-25 16:57:57.609287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.239 [2024-07-25 16:57:57.686773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.805 00:07:05.805 INFO: Checking if target configuration is the same... 
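The configuration comparison that json_diff.sh performs here dumps the live config and the saved spdk_tgt_config.json into temp files, normalizes both with config_filter.py -method sort, and runs diff -u. An analogous check using jq for the normalization; jq is an assumption for illustration, not what the harness actually calls:

    live=$(mktemp) saved=$(mktemp)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | jq --sort-keys . > "$live"
    jq --sort-keys . spdk_tgt_config.json > "$saved"
    if diff -u "$saved" "$live"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$saved"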
00:07:05.805 16:57:58 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.805 16:57:58 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:05.805 16:57:58 json_config -- json_config/common.sh@26 -- # echo '' 00:07:05.805 16:57:58 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:07:05.805 16:57:58 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:05.805 16:57:58 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:07:05.805 16:57:58 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:05.805 16:57:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:05.805 + '[' 2 -ne 2 ']' 00:07:05.805 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:05.805 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:05.805 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:05.805 +++ basename /dev/fd/62 00:07:05.805 ++ mktemp /tmp/62.XXX 00:07:05.805 + tmp_file_1=/tmp/62.p8m 00:07:05.806 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:05.806 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:05.806 + tmp_file_2=/tmp/spdk_tgt_config.json.gC0 00:07:05.806 + ret=0 00:07:05.806 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:06.063 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:06.063 + diff -u /tmp/62.p8m /tmp/spdk_tgt_config.json.gC0 00:07:06.063 INFO: JSON config files are the same 00:07:06.063 + echo 'INFO: JSON config files are the same' 00:07:06.063 + rm /tmp/62.p8m /tmp/spdk_tgt_config.json.gC0 00:07:06.063 + exit 0 00:07:06.063 INFO: changing configuration and checking if this can be detected... 00:07:06.063 16:57:58 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:07:06.063 16:57:58 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:06.063 16:57:58 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:06.064 16:57:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:06.322 16:57:58 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:06.322 16:57:58 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:07:06.322 16:57:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:06.322 + '[' 2 -ne 2 ']' 00:07:06.322 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:06.322 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:06.322 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:06.322 +++ basename /dev/fd/62 00:07:06.322 ++ mktemp /tmp/62.XXX 00:07:06.322 + tmp_file_1=/tmp/62.qsF 00:07:06.322 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:06.322 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:06.322 + tmp_file_2=/tmp/spdk_tgt_config.json.cVK 00:07:06.322 + ret=0 00:07:06.322 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:06.580 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:06.839 + diff -u /tmp/62.qsF /tmp/spdk_tgt_config.json.cVK 00:07:06.839 + ret=1 00:07:06.839 + echo '=== Start of file: /tmp/62.qsF ===' 00:07:06.839 + cat /tmp/62.qsF 00:07:06.839 + echo '=== End of file: /tmp/62.qsF ===' 00:07:06.839 + echo '' 00:07:06.839 + echo '=== Start of file: /tmp/spdk_tgt_config.json.cVK ===' 00:07:06.839 + cat /tmp/spdk_tgt_config.json.cVK 00:07:06.839 + echo '=== End of file: /tmp/spdk_tgt_config.json.cVK ===' 00:07:06.839 + echo '' 00:07:06.839 + rm /tmp/62.qsF /tmp/spdk_tgt_config.json.cVK 00:07:06.839 + exit 1 00:07:06.839 INFO: configuration change detected. 00:07:06.839 16:57:59 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:07:06.839 16:57:59 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:07:06.839 16:57:59 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:07:06.839 16:57:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:06.839 16:57:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.839 16:57:59 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:07:06.839 16:57:59 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:07:06.839 16:57:59 json_config -- json_config/json_config.sh@321 -- # [[ -n 59489 ]] 00:07:06.839 16:57:59 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:07:06.839 16:57:59 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:07:06.839 16:57:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:06.839 16:57:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.839 16:57:59 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:07:06.839 16:57:59 json_config -- json_config/json_config.sh@197 -- # uname -s 00:07:06.839 16:57:59 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:07:06.839 16:57:59 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:07:06.839 16:57:59 json_config -- json_config/json_config.sh@201 -- # [[ 1 -eq 1 ]] 00:07:06.839 16:57:59 json_config -- json_config/json_config.sh@202 -- # rbd_cleanup 00:07:06.839 16:57:59 json_config -- common/autotest_common.sh@1033 -- # hash ceph 00:07:06.839 16:57:59 json_config -- common/autotest_common.sh@1034 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:07:06.839 + base_dir=/var/tmp/ceph 00:07:06.839 + image=/var/tmp/ceph/ceph_raw.img 00:07:06.839 + dev=/dev/loop200 00:07:06.839 + pkill -9 ceph 00:07:06.839 + sleep 3 00:07:10.122 + umount /dev/loop200p2 00:07:10.122 umount: /dev/loop200p2: no mount point specified. 
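The umount and losetup failures above are expected: the rbd cleanup tears down ceph state that may never have been created, so every step has to tolerate absence. The same idea in a short sketch, with '|| true' standing in for the error tolerance of scripts/ceph/stop.sh:

    pkill -9 ceph || true
    sleep 3
    umount /dev/loop200p2 || true     # 'no mount point specified' is fine
    losetup -d /dev/loop200 || true   # 'No such device' is fine
    rm -rf /var/tmp/ceph
    rm -f /var/tmp/ceph_raw.img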
00:07:10.122 + losetup -d /dev/loop200 00:07:10.122 losetup: /dev/loop200: failed to use device: No such device 00:07:10.122 + rm -rf /var/tmp/ceph 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@1035 -- # rm -f /var/tmp/ceph_raw.img 00:07:10.122 16:58:02 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.122 16:58:02 json_config -- json_config/json_config.sh@327 -- # killprocess 59489 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@950 -- # '[' -z 59489 ']' 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@954 -- # kill -0 59489 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@955 -- # uname 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59489 00:07:10.122 killing process with pid 59489 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59489' 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@969 -- # kill 59489 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@974 -- # wait 59489 00:07:10.122 16:58:02 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:10.122 16:58:02 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.122 INFO: Success 00:07:10.122 16:58:02 json_config -- json_config/json_config.sh@332 -- # return 0 00:07:10.122 16:58:02 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:07:10.122 ************************************ 00:07:10.122 END TEST json_config 00:07:10.122 ************************************ 00:07:10.122 00:07:10.122 real 0m9.777s 00:07:10.122 user 0m11.739s 00:07:10.122 sys 0m1.817s 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.122 16:58:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.381 16:58:02 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:10.381 16:58:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.381 16:58:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.381 16:58:02 -- common/autotest_common.sh@10 -- # set +x 00:07:10.381 ************************************ 00:07:10.381 START TEST json_config_extra_key 00:07:10.381 ************************************ 00:07:10.381 16:58:02 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:10.381 16:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:10.381 16:58:02 
json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4f3ec45a-584a-4a72-a1b0-e42cc578c863 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=4f3ec45a-584a-4a72-a1b0-e42cc578c863 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:10.381 16:58:02 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.381 16:58:02 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.381 16:58:02 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.381 16:58:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.381 16:58:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.381 16:58:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.381 16:58:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:10.381 
16:58:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.381 16:58:02 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.381 16:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:10.381 16:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:10.381 16:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:10.381 16:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:10.381 16:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:10.381 16:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:10.381 16:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:10.381 16:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:10.381 16:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:10.381 16:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:10.381 INFO: launching applications... 00:07:10.381 16:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:10.381 16:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:10.381 16:58:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:10.381 16:58:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:10.381 16:58:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:10.381 16:58:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:10.381 Waiting for target to run... 00:07:10.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:10.381 16:58:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:10.381 16:58:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:10.381 16:58:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:10.381 16:58:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59670 00:07:10.381 16:58:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:10.381 16:58:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59670 /var/tmp/spdk_tgt.sock 00:07:10.381 16:58:02 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:10.381 16:58:02 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59670 ']' 00:07:10.381 16:58:02 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:10.381 16:58:02 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.381 16:58:02 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:10.381 16:58:02 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.381 16:58:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:10.639 [2024-07-25 16:58:02.852883] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:10.639 [2024-07-25 16:58:02.852961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59670 ] 00:07:10.896 [2024-07-25 16:58:03.209414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.897 [2024-07-25 16:58:03.287889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.461 00:07:11.461 INFO: shutting down applications... 00:07:11.461 16:58:03 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.461 16:58:03 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:11.461 16:58:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:11.461 16:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
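The shutdown traced next is the bounded-wait pattern these tests use everywhere: send SIGINT, then poll kill -0 up to 30 times with half-second sleeps. Condensed from the common.sh loop visible in the trace:

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break    # process gone, shutdown done
        sleep 0.5
    done
    echo 'SPDK target shutdown done'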
00:07:11.461 16:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:11.461 16:58:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:11.461 16:58:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:11.461 16:58:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59670 ]] 00:07:11.461 16:58:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59670 00:07:11.462 16:58:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:11.462 16:58:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:11.462 16:58:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59670 00:07:11.462 16:58:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:12.037 SPDK target shutdown done 00:07:12.037 Success 00:07:12.037 16:58:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:12.037 16:58:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:12.037 16:58:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59670 00:07:12.037 16:58:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:12.037 16:58:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:12.037 16:58:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:12.037 16:58:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:12.037 16:58:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:12.037 00:07:12.037 real 0m1.554s 00:07:12.037 user 0m1.268s 00:07:12.037 sys 0m0.412s 00:07:12.037 16:58:04 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.037 16:58:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:12.037 ************************************ 00:07:12.037 END TEST json_config_extra_key 00:07:12.037 ************************************ 00:07:12.037 16:58:04 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:12.037 16:58:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.037 16:58:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.037 16:58:04 -- common/autotest_common.sh@10 -- # set +x 00:07:12.037 ************************************ 00:07:12.037 START TEST alias_rpc 00:07:12.037 ************************************ 00:07:12.037 16:58:04 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:12.037 * Looking for test storage... 
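The shutdown sequence traced above (common.sh@38-45) is the standard SIGINT-then-poll pattern; as a standalone sketch with this run's pid:

    kill -SIGINT 59670                        # ask the target to exit cleanly
    for (( i = 0; i < 30; i++ )); do          # up to 30 * 0.5s of grace
        kill -0 59670 2>/dev/null || break    # kill -0 only probes existence
        sleep 0.5
    done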
00:07:12.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:12.037 16:58:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:12.037 16:58:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59734 00:07:12.037 16:58:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59734 00:07:12.037 16:58:04 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59734 ']' 00:07:12.037 16:58:04 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.037 16:58:04 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.037 16:58:04 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.037 16:58:04 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.037 16:58:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.037 16:58:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:12.037 [2024-07-25 16:58:04.488735] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:12.037 [2024-07-25 16:58:04.488825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59734 ] 00:07:12.295 [2024-07-25 16:58:04.631038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.295 [2024-07-25 16:58:04.736754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.862 16:58:05 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.862 16:58:05 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:12.862 16:58:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:13.120 16:58:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59734 00:07:13.120 16:58:05 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59734 ']' 00:07:13.120 16:58:05 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59734 00:07:13.120 16:58:05 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:13.120 16:58:05 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.120 16:58:05 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59734 00:07:13.120 killing process with pid 59734 00:07:13.120 16:58:05 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.120 16:58:05 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.120 16:58:05 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59734' 00:07:13.120 16:58:05 alias_rpc -- common/autotest_common.sh@969 -- # kill 59734 00:07:13.120 16:58:05 alias_rpc -- common/autotest_common.sh@974 -- # wait 59734 00:07:13.687 ************************************ 00:07:13.687 END TEST alias_rpc 00:07:13.687 ************************************ 00:07:13.687 00:07:13.687 real 0m1.607s 00:07:13.687 user 0m1.685s 00:07:13.687 sys 0m0.430s 00:07:13.687 16:58:05 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.687 16:58:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.687 
16:58:05 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:13.687 16:58:05 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:13.687 16:58:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.687 16:58:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.687 16:58:05 -- common/autotest_common.sh@10 -- # set +x 00:07:13.687 ************************************ 00:07:13.687 START TEST spdkcli_tcp 00:07:13.687 ************************************ 00:07:13.687 16:58:05 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:13.687 * Looking for test storage... 00:07:13.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:13.687 16:58:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:13.687 16:58:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:13.687 16:58:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:13.687 16:58:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:13.687 16:58:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:13.687 16:58:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:13.687 16:58:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:13.687 16:58:06 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:13.687 16:58:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:13.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.687 16:58:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59805 00:07:13.687 16:58:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59805 00:07:13.687 16:58:06 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59805 ']' 00:07:13.687 16:58:06 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.687 16:58:06 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.687 16:58:06 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.687 16:58:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:13.687 16:58:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.687 16:58:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:13.946 [2024-07-25 16:58:06.157501] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
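Here -m 0x3 is the reactor core mask (bits 0 and 1 set, matching the two "Reactor started" notices that follow) and -p 0 selects the main core. For reference, such masks are plain bit arithmetic:

    # Mask for cores 0 and 1 -> prints 0x3, as passed to spdk_tgt above
    printf '0x%x\n' $(( (1 << 0) | (1 << 1) ))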
00:07:13.946 [2024-07-25 16:58:06.157623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59805 ] 00:07:13.946 [2024-07-25 16:58:06.301565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.946 [2024-07-25 16:58:06.400262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.946 [2024-07-25 16:58:06.400271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.883 16:58:06 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.883 16:58:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:14.883 16:58:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59822 00:07:14.883 16:58:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:14.883 16:58:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:14.883 [ 00:07:14.883 "bdev_malloc_delete", 00:07:14.883 "bdev_malloc_create", 00:07:14.883 "bdev_null_resize", 00:07:14.883 "bdev_null_delete", 00:07:14.883 "bdev_null_create", 00:07:14.883 "bdev_nvme_cuse_unregister", 00:07:14.883 "bdev_nvme_cuse_register", 00:07:14.883 "bdev_opal_new_user", 00:07:14.883 "bdev_opal_set_lock_state", 00:07:14.883 "bdev_opal_delete", 00:07:14.883 "bdev_opal_get_info", 00:07:14.883 "bdev_opal_create", 00:07:14.883 "bdev_nvme_opal_revert", 00:07:14.883 "bdev_nvme_opal_init", 00:07:14.883 "bdev_nvme_send_cmd", 00:07:14.883 "bdev_nvme_get_path_iostat", 00:07:14.883 "bdev_nvme_get_mdns_discovery_info", 00:07:14.883 "bdev_nvme_stop_mdns_discovery", 00:07:14.883 "bdev_nvme_start_mdns_discovery", 00:07:14.883 "bdev_nvme_set_multipath_policy", 00:07:14.883 "bdev_nvme_set_preferred_path", 00:07:14.883 "bdev_nvme_get_io_paths", 00:07:14.883 "bdev_nvme_remove_error_injection", 00:07:14.883 "bdev_nvme_add_error_injection", 00:07:14.883 "bdev_nvme_get_discovery_info", 00:07:14.883 "bdev_nvme_stop_discovery", 00:07:14.883 "bdev_nvme_start_discovery", 00:07:14.883 "bdev_nvme_get_controller_health_info", 00:07:14.883 "bdev_nvme_disable_controller", 00:07:14.883 "bdev_nvme_enable_controller", 00:07:14.883 "bdev_nvme_reset_controller", 00:07:14.883 "bdev_nvme_get_transport_statistics", 00:07:14.883 "bdev_nvme_apply_firmware", 00:07:14.883 "bdev_nvme_detach_controller", 00:07:14.883 "bdev_nvme_get_controllers", 00:07:14.883 "bdev_nvme_attach_controller", 00:07:14.883 "bdev_nvme_set_hotplug", 00:07:14.883 "bdev_nvme_set_options", 00:07:14.883 "bdev_passthru_delete", 00:07:14.883 "bdev_passthru_create", 00:07:14.883 "bdev_lvol_set_parent_bdev", 00:07:14.883 "bdev_lvol_set_parent", 00:07:14.883 "bdev_lvol_check_shallow_copy", 00:07:14.883 "bdev_lvol_start_shallow_copy", 00:07:14.883 "bdev_lvol_grow_lvstore", 00:07:14.883 "bdev_lvol_get_lvols", 00:07:14.883 "bdev_lvol_get_lvstores", 00:07:14.883 "bdev_lvol_delete", 00:07:14.883 "bdev_lvol_set_read_only", 00:07:14.883 "bdev_lvol_resize", 00:07:14.883 "bdev_lvol_decouple_parent", 00:07:14.883 "bdev_lvol_inflate", 00:07:14.883 "bdev_lvol_rename", 00:07:14.883 "bdev_lvol_clone_bdev", 00:07:14.883 "bdev_lvol_clone", 00:07:14.883 "bdev_lvol_snapshot", 00:07:14.883 "bdev_lvol_create", 00:07:14.883 "bdev_lvol_delete_lvstore", 00:07:14.883 "bdev_lvol_rename_lvstore", 00:07:14.883 "bdev_lvol_create_lvstore", 
00:07:14.883 "bdev_raid_set_options", 00:07:14.883 "bdev_raid_remove_base_bdev", 00:07:14.883 "bdev_raid_add_base_bdev", 00:07:14.883 "bdev_raid_delete", 00:07:14.883 "bdev_raid_create", 00:07:14.883 "bdev_raid_get_bdevs", 00:07:14.883 "bdev_error_inject_error", 00:07:14.883 "bdev_error_delete", 00:07:14.883 "bdev_error_create", 00:07:14.883 "bdev_split_delete", 00:07:14.883 "bdev_split_create", 00:07:14.883 "bdev_delay_delete", 00:07:14.883 "bdev_delay_create", 00:07:14.883 "bdev_delay_update_latency", 00:07:14.883 "bdev_zone_block_delete", 00:07:14.883 "bdev_zone_block_create", 00:07:14.883 "blobfs_create", 00:07:14.883 "blobfs_detect", 00:07:14.883 "blobfs_set_cache_size", 00:07:14.883 "bdev_aio_delete", 00:07:14.883 "bdev_aio_rescan", 00:07:14.883 "bdev_aio_create", 00:07:14.883 "bdev_ftl_set_property", 00:07:14.883 "bdev_ftl_get_properties", 00:07:14.883 "bdev_ftl_get_stats", 00:07:14.883 "bdev_ftl_unmap", 00:07:14.883 "bdev_ftl_unload", 00:07:14.883 "bdev_ftl_delete", 00:07:14.883 "bdev_ftl_load", 00:07:14.883 "bdev_ftl_create", 00:07:14.883 "bdev_virtio_attach_controller", 00:07:14.883 "bdev_virtio_scsi_get_devices", 00:07:14.883 "bdev_virtio_detach_controller", 00:07:14.883 "bdev_virtio_blk_set_hotplug", 00:07:14.883 "bdev_iscsi_delete", 00:07:14.883 "bdev_iscsi_create", 00:07:14.883 "bdev_iscsi_set_options", 00:07:14.883 "bdev_rbd_get_clusters_info", 00:07:14.883 "bdev_rbd_unregister_cluster", 00:07:14.883 "bdev_rbd_register_cluster", 00:07:14.883 "bdev_rbd_resize", 00:07:14.883 "bdev_rbd_delete", 00:07:14.883 "bdev_rbd_create", 00:07:14.883 "accel_error_inject_error", 00:07:14.883 "ioat_scan_accel_module", 00:07:14.883 "dsa_scan_accel_module", 00:07:14.883 "iaa_scan_accel_module", 00:07:14.883 "keyring_file_remove_key", 00:07:14.883 "keyring_file_add_key", 00:07:14.883 "keyring_linux_set_options", 00:07:14.883 "iscsi_get_histogram", 00:07:14.883 "iscsi_enable_histogram", 00:07:14.883 "iscsi_set_options", 00:07:14.883 "iscsi_get_auth_groups", 00:07:14.883 "iscsi_auth_group_remove_secret", 00:07:14.883 "iscsi_auth_group_add_secret", 00:07:14.883 "iscsi_delete_auth_group", 00:07:14.883 "iscsi_create_auth_group", 00:07:14.883 "iscsi_set_discovery_auth", 00:07:14.883 "iscsi_get_options", 00:07:14.883 "iscsi_target_node_request_logout", 00:07:14.883 "iscsi_target_node_set_redirect", 00:07:14.883 "iscsi_target_node_set_auth", 00:07:14.883 "iscsi_target_node_add_lun", 00:07:14.883 "iscsi_get_stats", 00:07:14.883 "iscsi_get_connections", 00:07:14.883 "iscsi_portal_group_set_auth", 00:07:14.883 "iscsi_start_portal_group", 00:07:14.883 "iscsi_delete_portal_group", 00:07:14.883 "iscsi_create_portal_group", 00:07:14.883 "iscsi_get_portal_groups", 00:07:14.883 "iscsi_delete_target_node", 00:07:14.883 "iscsi_target_node_remove_pg_ig_maps", 00:07:14.883 "iscsi_target_node_add_pg_ig_maps", 00:07:14.883 "iscsi_create_target_node", 00:07:14.883 "iscsi_get_target_nodes", 00:07:14.883 "iscsi_delete_initiator_group", 00:07:14.883 "iscsi_initiator_group_remove_initiators", 00:07:14.883 "iscsi_initiator_group_add_initiators", 00:07:14.883 "iscsi_create_initiator_group", 00:07:14.883 "iscsi_get_initiator_groups", 00:07:14.883 "nvmf_set_crdt", 00:07:14.883 "nvmf_set_config", 00:07:14.883 "nvmf_set_max_subsystems", 00:07:14.883 "nvmf_stop_mdns_prr", 00:07:14.883 "nvmf_publish_mdns_prr", 00:07:14.883 "nvmf_subsystem_get_listeners", 00:07:14.884 "nvmf_subsystem_get_qpairs", 00:07:14.884 "nvmf_subsystem_get_controllers", 00:07:14.884 "nvmf_get_stats", 00:07:14.884 "nvmf_get_transports", 00:07:14.884 
"nvmf_create_transport", 00:07:14.884 "nvmf_get_targets", 00:07:14.884 "nvmf_delete_target", 00:07:14.884 "nvmf_create_target", 00:07:14.884 "nvmf_subsystem_allow_any_host", 00:07:14.884 "nvmf_subsystem_remove_host", 00:07:14.884 "nvmf_subsystem_add_host", 00:07:14.884 "nvmf_ns_remove_host", 00:07:14.884 "nvmf_ns_add_host", 00:07:14.884 "nvmf_subsystem_remove_ns", 00:07:14.884 "nvmf_subsystem_add_ns", 00:07:14.884 "nvmf_subsystem_listener_set_ana_state", 00:07:14.884 "nvmf_discovery_get_referrals", 00:07:14.884 "nvmf_discovery_remove_referral", 00:07:14.884 "nvmf_discovery_add_referral", 00:07:14.884 "nvmf_subsystem_remove_listener", 00:07:14.884 "nvmf_subsystem_add_listener", 00:07:14.884 "nvmf_delete_subsystem", 00:07:14.884 "nvmf_create_subsystem", 00:07:14.884 "nvmf_get_subsystems", 00:07:14.884 "env_dpdk_get_mem_stats", 00:07:14.884 "nbd_get_disks", 00:07:14.884 "nbd_stop_disk", 00:07:14.884 "nbd_start_disk", 00:07:14.884 "ublk_recover_disk", 00:07:14.884 "ublk_get_disks", 00:07:14.884 "ublk_stop_disk", 00:07:14.884 "ublk_start_disk", 00:07:14.884 "ublk_destroy_target", 00:07:14.884 "ublk_create_target", 00:07:14.884 "virtio_blk_create_transport", 00:07:14.884 "virtio_blk_get_transports", 00:07:14.884 "vhost_controller_set_coalescing", 00:07:14.884 "vhost_get_controllers", 00:07:14.884 "vhost_delete_controller", 00:07:14.884 "vhost_create_blk_controller", 00:07:14.884 "vhost_scsi_controller_remove_target", 00:07:14.884 "vhost_scsi_controller_add_target", 00:07:14.884 "vhost_start_scsi_controller", 00:07:14.884 "vhost_create_scsi_controller", 00:07:14.884 "thread_set_cpumask", 00:07:14.884 "framework_get_governor", 00:07:14.884 "framework_get_scheduler", 00:07:14.884 "framework_set_scheduler", 00:07:14.884 "framework_get_reactors", 00:07:14.884 "thread_get_io_channels", 00:07:14.884 "thread_get_pollers", 00:07:14.884 "thread_get_stats", 00:07:14.884 "framework_monitor_context_switch", 00:07:14.884 "spdk_kill_instance", 00:07:14.884 "log_enable_timestamps", 00:07:14.884 "log_get_flags", 00:07:14.884 "log_clear_flag", 00:07:14.884 "log_set_flag", 00:07:14.884 "log_get_level", 00:07:14.884 "log_set_level", 00:07:14.884 "log_get_print_level", 00:07:14.884 "log_set_print_level", 00:07:14.884 "framework_enable_cpumask_locks", 00:07:14.884 "framework_disable_cpumask_locks", 00:07:14.884 "framework_wait_init", 00:07:14.884 "framework_start_init", 00:07:14.884 "scsi_get_devices", 00:07:14.884 "bdev_get_histogram", 00:07:14.884 "bdev_enable_histogram", 00:07:14.884 "bdev_set_qos_limit", 00:07:14.884 "bdev_set_qd_sampling_period", 00:07:14.884 "bdev_get_bdevs", 00:07:14.884 "bdev_reset_iostat", 00:07:14.884 "bdev_get_iostat", 00:07:14.884 "bdev_examine", 00:07:14.884 "bdev_wait_for_examine", 00:07:14.884 "bdev_set_options", 00:07:14.884 "notify_get_notifications", 00:07:14.884 "notify_get_types", 00:07:14.884 "accel_get_stats", 00:07:14.884 "accel_set_options", 00:07:14.884 "accel_set_driver", 00:07:14.884 "accel_crypto_key_destroy", 00:07:14.884 "accel_crypto_keys_get", 00:07:14.884 "accel_crypto_key_create", 00:07:14.884 "accel_assign_opc", 00:07:14.884 "accel_get_module_info", 00:07:14.884 "accel_get_opc_assignments", 00:07:14.884 "vmd_rescan", 00:07:14.884 "vmd_remove_device", 00:07:14.884 "vmd_enable", 00:07:14.884 "sock_get_default_impl", 00:07:14.884 "sock_set_default_impl", 00:07:14.884 "sock_impl_set_options", 00:07:14.884 "sock_impl_get_options", 00:07:14.884 "iobuf_get_stats", 00:07:14.884 "iobuf_set_options", 00:07:14.884 "framework_get_pci_devices", 00:07:14.884 
"framework_get_config", 00:07:14.884 "framework_get_subsystems", 00:07:14.884 "trace_get_info", 00:07:14.884 "trace_get_tpoint_group_mask", 00:07:14.884 "trace_disable_tpoint_group", 00:07:14.884 "trace_enable_tpoint_group", 00:07:14.884 "trace_clear_tpoint_mask", 00:07:14.884 "trace_set_tpoint_mask", 00:07:14.884 "keyring_get_keys", 00:07:14.884 "spdk_get_version", 00:07:14.884 "rpc_get_methods" 00:07:14.884 ] 00:07:14.884 16:58:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:14.884 16:58:07 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:14.884 16:58:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:14.884 16:58:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:14.884 16:58:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59805 00:07:14.884 16:58:07 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59805 ']' 00:07:14.884 16:58:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59805 00:07:14.884 16:58:07 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:14.884 16:58:07 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.884 16:58:07 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59805 00:07:14.884 16:58:07 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.884 16:58:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.884 16:58:07 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59805' 00:07:14.884 killing process with pid 59805 00:07:14.884 16:58:07 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59805 00:07:14.884 16:58:07 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59805 00:07:15.452 ************************************ 00:07:15.452 END TEST spdkcli_tcp 00:07:15.452 ************************************ 00:07:15.452 00:07:15.452 real 0m1.668s 00:07:15.452 user 0m2.902s 00:07:15.452 sys 0m0.484s 00:07:15.452 16:58:07 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.452 16:58:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.452 16:58:07 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:15.452 16:58:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.452 16:58:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.452 16:58:07 -- common/autotest_common.sh@10 -- # set +x 00:07:15.452 ************************************ 00:07:15.452 START TEST dpdk_mem_utility 00:07:15.452 ************************************ 00:07:15.452 16:58:07 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:15.452 * Looking for test storage... 
00:07:15.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:15.452 16:58:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:15.452 16:58:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59896 00:07:15.452 16:58:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:15.452 16:58:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59896 00:07:15.452 16:58:07 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59896 ']' 00:07:15.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.452 16:58:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.452 16:58:07 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.452 16:58:07 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.452 16:58:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.452 16:58:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:15.452 [2024-07-25 16:58:07.918018] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:15.452 [2024-07-25 16:58:07.918135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59896 ] 00:07:15.711 [2024-07-25 16:58:08.058546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.711 [2024-07-25 16:58:08.158636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.649 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.649 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:16.649 16:58:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:16.649 16:58:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:16.649 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.649 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:16.649 { 00:07:16.649 "filename": "/tmp/spdk_mem_dump.txt" 00:07:16.649 } 00:07:16.649 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.649 16:58:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:16.649 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:16.649 1 heaps totaling size 814.000000 MiB 00:07:16.649 size: 814.000000 MiB heap id: 0 00:07:16.649 end heaps---------- 00:07:16.649 8 mempools totaling size 598.116089 MiB 00:07:16.649 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:16.649 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:16.649 size: 84.521057 MiB name: bdev_io_59896 00:07:16.649 size: 51.011292 MiB name: evtpool_59896 00:07:16.649 size: 50.003479 MiB name: msgpool_59896 00:07:16.649 size: 21.763794 MiB name: PDU_Pool 00:07:16.649 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:07:16.649 size: 0.026123 MiB name: Session_Pool 00:07:16.649 end mempools------- 00:07:16.649 6 memzones totaling size 4.142822 MiB 00:07:16.649 size: 1.000366 MiB name: RG_ring_0_59896 00:07:16.649 size: 1.000366 MiB name: RG_ring_1_59896 00:07:16.649 size: 1.000366 MiB name: RG_ring_4_59896 00:07:16.649 size: 1.000366 MiB name: RG_ring_5_59896 00:07:16.649 size: 0.125366 MiB name: RG_ring_2_59896 00:07:16.649 size: 0.015991 MiB name: RG_ring_3_59896 00:07:16.649 end memzones------- 00:07:16.649 16:58:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:16.649 heap id: 0 total size: 814.000000 MiB number of busy elements: 298 number of free elements: 15 00:07:16.649 list of free elements. size: 12.472290 MiB 00:07:16.649 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:16.649 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:16.649 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:16.649 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:16.649 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:16.649 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:16.649 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:16.649 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:16.649 element at address: 0x200000200000 with size: 0.833191 MiB 00:07:16.649 element at address: 0x20001aa00000 with size: 0.568237 MiB 00:07:16.649 element at address: 0x20000b200000 with size: 0.489807 MiB 00:07:16.649 element at address: 0x200000800000 with size: 0.486145 MiB 00:07:16.649 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:16.649 element at address: 0x200027e00000 with size: 0.396301 MiB 00:07:16.649 element at address: 0x200003a00000 with size: 0.347839 MiB 00:07:16.649 list of standard malloc elements. 
size: 199.265137 MiB 00:07:16.649 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:16.649 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:16.649 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:16.649 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:16.649 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:16.649 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:16.649 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:16.649 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:16.649 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:16.649 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:07:16.649 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:07:16.650 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000087c740 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000087c800 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000087c980 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59180 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59240 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59300 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59480 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59540 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59600 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59780 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59840 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59900 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:07:16.650 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa91fc0 
with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:07:16.650 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94480 with size: 0.000183 MiB 
00:07:16.651 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:16.651 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e65740 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e65800 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6c400 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:07:16.651 element at 
address: 0x200027e6d8c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6fd80 
with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:16.651 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:16.651 list of memzone associated elements. size: 602.262573 MiB 00:07:16.651 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:16.651 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:16.651 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:16.651 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:16.651 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:16.651 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59896_0 00:07:16.651 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:16.651 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59896_0 00:07:16.651 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:16.651 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59896_0 00:07:16.651 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:16.651 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:16.651 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:16.651 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:16.651 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:16.651 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59896 00:07:16.651 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:16.651 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59896 00:07:16.651 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:16.651 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59896 00:07:16.651 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:16.651 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:16.651 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:16.651 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:16.651 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:16.651 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:16.651 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:16.651 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:16.651 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:16.651 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59896 00:07:16.651 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:16.651 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59896 00:07:16.651 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:16.651 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59896 00:07:16.651 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:16.651 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59896 00:07:16.652 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:16.652 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59896 00:07:16.652 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:16.652 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:16.652 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:16.652 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:16.652 element at address: 0x20001947c540 with size: 
0.250488 MiB 00:07:16.652 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:16.652 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:16.652 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59896 00:07:16.652 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:16.652 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:16.652 element at address: 0x200027e658c0 with size: 0.023743 MiB 00:07:16.652 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:16.652 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:16.652 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59896 00:07:16.652 element at address: 0x200027e6ba00 with size: 0.002441 MiB 00:07:16.652 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:16.652 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:07:16.652 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59896 00:07:16.652 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:16.652 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59896 00:07:16.652 element at address: 0x200027e6c4c0 with size: 0.000305 MiB 00:07:16.652 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:16.652 16:58:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:16.652 16:58:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59896 00:07:16.652 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59896 ']' 00:07:16.652 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59896 00:07:16.652 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:16.652 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.652 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59896 00:07:16.652 killing process with pid 59896 00:07:16.652 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.652 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.652 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59896' 00:07:16.652 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59896 00:07:16.652 16:58:08 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59896 00:07:16.911 00:07:16.911 real 0m1.525s 00:07:16.911 user 0m1.538s 00:07:16.911 sys 0m0.427s 00:07:16.911 16:58:09 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.911 ************************************ 00:07:16.911 END TEST dpdk_mem_utility 00:07:16.911 ************************************ 00:07:16.911 16:58:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:16.911 16:58:09 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:16.911 16:58:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.912 16:58:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.912 16:58:09 -- common/autotest_common.sh@10 -- # set +x 00:07:16.912 ************************************ 00:07:16.912 START TEST event 00:07:16.912 ************************************ 00:07:16.912 16:58:09 event -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:17.171 * Looking for test storage... 00:07:17.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:17.171 16:58:09 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:17.171 16:58:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:17.171 16:58:09 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:17.171 16:58:09 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:17.171 16:58:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.171 16:58:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:17.171 ************************************ 00:07:17.171 START TEST event_perf 00:07:17.171 ************************************ 00:07:17.171 16:58:09 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:17.171 Running I/O for 1 seconds...[2024-07-25 16:58:09.454684] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:17.171 [2024-07-25 16:58:09.454941] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59965 ] 00:07:17.171 [2024-07-25 16:58:09.599578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.430 [2024-07-25 16:58:09.702378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.430 [2024-07-25 16:58:09.702495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.430 [2024-07-25 16:58:09.702548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.430 Running I/O for 1 seconds...[2024-07-25 16:58:09.702553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.449 00:07:18.449 lcore 0: 212598 00:07:18.449 lcore 1: 212600 00:07:18.449 lcore 2: 212602 00:07:18.449 lcore 3: 212596 00:07:18.449 done. 00:07:18.449 00:07:18.449 real 0m1.356s 00:07:18.449 user 0m4.155s 00:07:18.449 sys 0m0.073s 00:07:18.449 16:58:10 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.449 16:58:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:18.449 ************************************ 00:07:18.449 END TEST event_perf 00:07:18.449 ************************************ 00:07:18.449 16:58:10 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:18.449 16:58:10 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:18.449 16:58:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.449 16:58:10 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.449 ************************************ 00:07:18.449 START TEST event_reactor 00:07:18.449 ************************************ 00:07:18.449 16:58:10 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:18.449 [2024-07-25 16:58:10.872114] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
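The START/END banners and the real/user/sys timing that bracket event_perf (and every other test in this log) come from the run_test helper. A simplified sketch of its shape; the real version in autotest_common.sh also handles xtrace toggling and the '[' N -le 1 ']' argument check seen above:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys lines above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }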
00:07:18.449 [2024-07-25 16:58:10.872234] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60010 ] 00:07:18.707 [2024-07-25 16:58:11.025314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.707 [2024-07-25 16:58:11.108298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.084 test_start 00:07:20.084 oneshot 00:07:20.084 tick 100 00:07:20.084 tick 100 00:07:20.084 tick 250 00:07:20.084 tick 100 00:07:20.084 tick 100 00:07:20.084 tick 100 00:07:20.084 tick 500 00:07:20.084 tick 250 00:07:20.084 tick 100 00:07:20.084 tick 100 00:07:20.084 tick 250 00:07:20.084 tick 100 00:07:20.084 tick 100 00:07:20.084 test_end 00:07:20.084 00:07:20.084 real 0m1.336s 00:07:20.084 user 0m1.167s 00:07:20.084 sys 0m0.063s 00:07:20.084 16:58:12 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.084 16:58:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:20.084 ************************************ 00:07:20.084 END TEST event_reactor 00:07:20.084 ************************************ 00:07:20.084 16:58:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:20.084 16:58:12 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:20.084 16:58:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.084 16:58:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.084 ************************************ 00:07:20.084 START TEST event_reactor_perf 00:07:20.084 ************************************ 00:07:20.084 16:58:12 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:20.084 [2024-07-25 16:58:12.276397] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:20.084 [2024-07-25 16:58:12.276484] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60040 ] 00:07:20.084 [2024-07-25 16:58:12.419380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.084 [2024-07-25 16:58:12.505589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.461 test_start 00:07:21.461 test_end 00:07:21.461 Performance: 493085 events per second 00:07:21.461 00:07:21.461 real 0m1.325s 00:07:21.461 user 0m1.162s 00:07:21.461 sys 0m0.057s 00:07:21.461 16:58:13 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.461 16:58:13 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:21.461 ************************************ 00:07:21.461 END TEST event_reactor_perf 00:07:21.461 ************************************ 00:07:21.461 16:58:13 event -- event/event.sh@49 -- # uname -s 00:07:21.461 16:58:13 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:21.461 16:58:13 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:21.461 16:58:13 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.461 16:58:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.461 16:58:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:21.461 ************************************ 00:07:21.461 START TEST event_scheduler 00:07:21.461 ************************************ 00:07:21.461 16:58:13 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:21.461 * Looking for test storage... 00:07:21.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:21.461 16:58:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:21.461 16:58:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60099 00:07:21.461 16:58:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:21.461 16:58:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60099 00:07:21.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.461 16:58:13 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 60099 ']' 00:07:21.461 16:58:13 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.461 16:58:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:21.461 16:58:13 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.461 16:58:13 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.461 16:58:13 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.461 16:58:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:21.461 [2024-07-25 16:58:13.842159] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:21.461 [2024-07-25 16:58:13.842227] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60099 ] 00:07:21.720 [2024-07-25 16:58:13.974316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.720 [2024-07-25 16:58:14.078444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.720 [2024-07-25 16:58:14.078626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.720 [2024-07-25 16:58:14.079792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.720 [2024-07-25 16:58:14.079792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.289 16:58:14 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.289 16:58:14 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:22.289 16:58:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:22.289 16:58:14 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.289 16:58:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:22.289 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:22.289 POWER: Cannot set governor of lcore 0 to userspace 00:07:22.289 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:22.289 POWER: Cannot set governor of lcore 0 to performance 00:07:22.289 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:22.289 POWER: Cannot set governor of lcore 0 to userspace 00:07:22.289 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:22.289 POWER: Cannot set governor of lcore 0 to userspace 00:07:22.290 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:22.290 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:22.290 POWER: Unable to set Power Management Environment for lcore 0 00:07:22.290 [2024-07-25 16:58:14.699827] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:22.290 [2024-07-25 16:58:14.699839] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:22.290 [2024-07-25 16:58:14.699847] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:22.290 [2024-07-25 16:58:14.699858] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:22.290 [2024-07-25 16:58:14.699864] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:22.290 [2024-07-25 16:58:14.699871] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:22.290 16:58:14 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.290 16:58:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:22.290 16:58:14 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.290 16:58:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:22.550 [2024-07-25 16:58:14.774844] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
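Note: the POWER errors just above are expected inside a VM; with no writable cpufreq sysfs and no virtio power channel the dpdk governor cannot initialize, so the dynamic scheduler falls back to its built-in thresholds (load limit 20, core limit 80, core busy 95). The scheduler_create_thread test that follows drives the running app through the scheduler RPC plugin; a condensed sketch of the calls it issues, pieced together from this trace (thread ids 11 and 12 are simply what this run assigned):

    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0    # -> thread_id=11
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100      # -> thread_id=12
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12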
00:07:22.550 16:58:14 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.550 16:58:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:22.550 16:58:14 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.550 16:58:14 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.550 16:58:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:22.550 ************************************ 00:07:22.550 START TEST scheduler_create_thread 00:07:22.550 ************************************ 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.550 2 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.550 3 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.550 4 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.550 5 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.550 6 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.550 7 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.550 8 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.550 9 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.550 10 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.550 16:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.928 16:58:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.928 16:58:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:23.928 16:58:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:23.928 16:58:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.928 16:58:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.304 16:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.304 00:07:25.304 real 0m2.605s 00:07:25.304 user 0m0.027s 00:07:25.304 sys 0m0.006s 00:07:25.304 16:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.304 16:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.304 ************************************ 00:07:25.304 END TEST scheduler_create_thread 00:07:25.304 ************************************ 00:07:25.304 16:58:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:25.304 16:58:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60099 00:07:25.304 16:58:17 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 60099 ']' 00:07:25.304 16:58:17 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 60099 00:07:25.304 16:58:17 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:25.304 16:58:17 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.304 16:58:17 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60099 00:07:25.304 16:58:17 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:25.304 16:58:17 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:25.304 killing process with pid 60099 00:07:25.304 16:58:17 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60099' 00:07:25.304 16:58:17 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 60099 00:07:25.304 16:58:17 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 60099 00:07:25.563 [2024-07-25 16:58:17.873265] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
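Note: the kill sequence above is the shared killprocess helper at work; roughly, from this trace, it performs the following before the test's timing summary is printed:

    kill -0 "$pid"                     # confirm the process still exists
    ps --no-headers -o comm= "$pid"    # on Linux, resolve the process name (reactor_2 here)
    [ "$process_name" = sudo ]         # presumably switches to killing a child when run via sudo
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"         # terminate and reap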
00:07:25.822 00:07:25.822 real 0m4.431s 00:07:25.822 user 0m8.032s 00:07:25.822 sys 0m0.371s 00:07:25.822 16:58:18 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.822 16:58:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:25.822 ************************************ 00:07:25.822 END TEST event_scheduler 00:07:25.822 ************************************ 00:07:25.822 16:58:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:25.822 16:58:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:25.822 16:58:18 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.822 16:58:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.822 16:58:18 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.822 ************************************ 00:07:25.822 START TEST app_repeat 00:07:25.822 ************************************ 00:07:25.822 16:58:18 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:25.822 16:58:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.822 16:58:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.822 16:58:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:25.822 16:58:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.822 16:58:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:25.822 16:58:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:25.822 16:58:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:25.822 16:58:18 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:25.822 16:58:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60201 00:07:25.822 16:58:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:25.822 Process app_repeat pid: 60201 00:07:25.822 16:58:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60201' 00:07:25.822 16:58:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:25.822 spdk_app_start Round 0 00:07:25.823 16:58:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:25.823 16:58:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60201 /var/tmp/spdk-nbd.sock 00:07:25.823 16:58:18 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60201 ']' 00:07:25.823 16:58:18 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:25.823 16:58:18 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:25.823 16:58:18 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:25.823 16:58:18 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.823 16:58:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:25.823 [2024-07-25 16:58:18.178511] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:25.823 [2024-07-25 16:58:18.178594] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60201 ] 00:07:26.082 [2024-07-25 16:58:18.325357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.082 [2024-07-25 16:58:18.417378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.082 [2024-07-25 16:58:18.417379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.648 16:58:19 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.648 16:58:19 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:26.648 16:58:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:26.906 Malloc0 00:07:26.906 16:58:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:27.164 Malloc1 00:07:27.164 16:58:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:27.164 16:58:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.164 16:58:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:27.164 16:58:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:27.164 16:58:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.164 16:58:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:27.164 16:58:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:27.164 16:58:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.165 16:58:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:27.165 16:58:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:27.165 16:58:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.165 16:58:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:27.165 16:58:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:27.165 16:58:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:27.165 16:58:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:27.165 16:58:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:27.423 /dev/nbd0 00:07:27.423 16:58:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:27.423 16:58:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:27.423 16:58:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:27.423 16:58:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:27.423 16:58:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:27.423 16:58:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:27.423 16:58:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:27.423 16:58:19 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:07:27.423 16:58:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:27.423 16:58:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:27.423 16:58:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:27.423 1+0 records in 00:07:27.423 1+0 records out 00:07:27.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473818 s, 8.6 MB/s 00:07:27.423 16:58:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:27.423 16:58:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:27.423 16:58:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:27.423 16:58:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:27.423 16:58:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:27.423 16:58:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.423 16:58:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:27.423 16:58:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:27.682 /dev/nbd1 00:07:27.682 16:58:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:27.682 16:58:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:27.682 16:58:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:27.682 16:58:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:27.682 16:58:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:27.682 16:58:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:27.682 16:58:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:27.682 16:58:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:27.682 16:58:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:27.682 16:58:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:27.682 16:58:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:27.682 1+0 records in 00:07:27.682 1+0 records out 00:07:27.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319496 s, 12.8 MB/s 00:07:27.682 16:58:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:27.683 16:58:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:27.683 16:58:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:27.683 16:58:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:27.683 16:58:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:27.683 16:58:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.683 16:58:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:27.683 16:58:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:27.683 16:58:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.683 
16:58:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:27.942 { 00:07:27.942 "nbd_device": "/dev/nbd0", 00:07:27.942 "bdev_name": "Malloc0" 00:07:27.942 }, 00:07:27.942 { 00:07:27.942 "nbd_device": "/dev/nbd1", 00:07:27.942 "bdev_name": "Malloc1" 00:07:27.942 } 00:07:27.942 ]' 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:27.942 { 00:07:27.942 "nbd_device": "/dev/nbd0", 00:07:27.942 "bdev_name": "Malloc0" 00:07:27.942 }, 00:07:27.942 { 00:07:27.942 "nbd_device": "/dev/nbd1", 00:07:27.942 "bdev_name": "Malloc1" 00:07:27.942 } 00:07:27.942 ]' 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:27.942 /dev/nbd1' 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:27.942 /dev/nbd1' 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:27.942 256+0 records in 00:07:27.942 256+0 records out 00:07:27.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00478788 s, 219 MB/s 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:27.942 256+0 records in 00:07:27.942 256+0 records out 00:07:27.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281542 s, 37.2 MB/s 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:27.942 256+0 records in 00:07:27.942 256+0 records out 00:07:27.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262576 s, 39.9 MB/s 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.942 16:58:20 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.942 16:58:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:28.201 16:58:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:28.201 16:58:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:28.201 16:58:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:28.201 16:58:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.201 16:58:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.201 16:58:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:28.201 16:58:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:28.201 16:58:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.201 16:58:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.201 16:58:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:28.461 16:58:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:28.461 16:58:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:28.461 16:58:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:28.461 16:58:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.461 16:58:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.461 16:58:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:28.461 16:58:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:28.461 16:58:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.461 16:58:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.461 16:58:20 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.461 16:58:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.721 16:58:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:28.721 16:58:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.721 16:58:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:28.721 16:58:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:28.721 16:58:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:28.721 16:58:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.721 16:58:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:28.721 16:58:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:28.721 16:58:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:28.721 16:58:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:28.721 16:58:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:28.721 16:58:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:28.721 16:58:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:28.979 16:58:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:29.238 [2024-07-25 16:58:21.529418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:29.238 [2024-07-25 16:58:21.622586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.238 [2024-07-25 16:58:21.622588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.238 [2024-07-25 16:58:21.664978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:29.238 [2024-07-25 16:58:21.665030] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:32.528 spdk_app_start Round 1 00:07:32.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:32.528 16:58:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:32.528 16:58:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:32.528 16:58:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60201 /var/tmp/spdk-nbd.sock 00:07:32.528 16:58:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60201 ']' 00:07:32.528 16:58:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:32.528 16:58:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.528 16:58:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
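Note: each app_repeat round repeats the same nbd round-trip verification shown above. A condensed sketch of the flow, using the exact socket and paths from this run:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096           # -> Malloc0 (repeated for Malloc1)
    $RPC nbd_start_disk Malloc0 /dev/nbd0     # expose the bdev as a kernel block device
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
    dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0   # verify the write-back
    $RPC nbd_stop_disk /dev/nbd0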
00:07:32.528 16:58:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.528 16:58:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:32.528 16:58:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.528 16:58:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:32.528 16:58:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:32.528 Malloc0 00:07:32.528 16:58:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:32.528 Malloc1 00:07:32.840 16:58:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.840 16:58:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:32.840 /dev/nbd0 00:07:32.840 16:58:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:32.840 16:58:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:32.840 16:58:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:32.840 16:58:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:32.840 16:58:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:32.840 16:58:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:32.840 16:58:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:32.840 16:58:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:32.840 16:58:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:32.840 16:58:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:32.840 16:58:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.840 1+0 records in 00:07:32.840 1+0 records out 
00:07:32.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411903 s, 9.9 MB/s 00:07:32.840 16:58:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.840 16:58:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:32.840 16:58:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.840 16:58:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:32.840 16:58:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:32.840 16:58:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.840 16:58:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.840 16:58:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:33.114 /dev/nbd1 00:07:33.114 16:58:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:33.114 16:58:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:33.114 16:58:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:33.114 16:58:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:33.114 16:58:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:33.114 16:58:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:33.114 16:58:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:33.114 16:58:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:33.114 16:58:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:33.114 16:58:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:33.115 16:58:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:33.115 1+0 records in 00:07:33.115 1+0 records out 00:07:33.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176315 s, 23.2 MB/s 00:07:33.115 16:58:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:33.115 16:58:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:33.115 16:58:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:33.115 16:58:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:33.115 16:58:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:33.115 16:58:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.115 16:58:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:33.115 16:58:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:33.115 16:58:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.115 16:58:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:33.374 { 00:07:33.374 "nbd_device": "/dev/nbd0", 00:07:33.374 "bdev_name": "Malloc0" 00:07:33.374 }, 00:07:33.374 { 00:07:33.374 "nbd_device": "/dev/nbd1", 00:07:33.374 "bdev_name": "Malloc1" 00:07:33.374 } 
00:07:33.374 ]' 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:33.374 { 00:07:33.374 "nbd_device": "/dev/nbd0", 00:07:33.374 "bdev_name": "Malloc0" 00:07:33.374 }, 00:07:33.374 { 00:07:33.374 "nbd_device": "/dev/nbd1", 00:07:33.374 "bdev_name": "Malloc1" 00:07:33.374 } 00:07:33.374 ]' 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:33.374 /dev/nbd1' 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:33.374 /dev/nbd1' 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:33.374 256+0 records in 00:07:33.374 256+0 records out 00:07:33.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116381 s, 90.1 MB/s 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:33.374 256+0 records in 00:07:33.374 256+0 records out 00:07:33.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252037 s, 41.6 MB/s 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:33.374 256+0 records in 00:07:33.374 256+0 records out 00:07:33.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264855 s, 39.6 MB/s 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:33.374 16:58:25 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.374 16:58:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:33.632 16:58:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:33.633 16:58:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:33.633 16:58:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:33.633 16:58:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.633 16:58:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.633 16:58:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:33.633 16:58:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:33.633 16:58:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.633 16:58:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.633 16:58:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:33.890 16:58:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:33.890 16:58:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:33.890 16:58:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:33.890 16:58:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.890 16:58:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.890 16:58:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:33.890 16:58:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:33.890 16:58:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.890 16:58:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:33.890 16:58:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.890 16:58:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.148 16:58:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:34.148 16:58:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:34.148 16:58:26 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:34.148 16:58:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:34.148 16:58:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:34.148 16:58:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.148 16:58:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:34.148 16:58:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:34.148 16:58:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:34.148 16:58:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:34.148 16:58:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:34.148 16:58:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:34.148 16:58:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:34.406 16:58:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:34.665 [2024-07-25 16:58:26.898280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:34.665 [2024-07-25 16:58:26.998267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.665 [2024-07-25 16:58:26.998268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.665 [2024-07-25 16:58:27.041862] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:34.665 [2024-07-25 16:58:27.041910] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:37.960 spdk_app_start Round 2 00:07:37.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:37.960 16:58:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:37.960 16:58:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:37.960 16:58:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60201 /var/tmp/spdk-nbd.sock 00:07:37.960 16:58:29 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60201 ']' 00:07:37.960 16:58:29 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:37.960 16:58:29 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.960 16:58:29 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
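Note: the teardown above checks that no nbd devices are left behind: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, and after nbd_stop_disk the test expects that array to be empty, verified by counting device paths:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd    # 0 once all disks are stopped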
00:07:37.960 16:58:29 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.960 16:58:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:37.960 16:58:29 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.960 16:58:29 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:37.960 16:58:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:37.960 Malloc0 00:07:37.960 16:58:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:37.960 Malloc1 00:07:37.960 16:58:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:37.960 16:58:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:38.219 /dev/nbd0 00:07:38.219 16:58:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:38.219 16:58:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:38.219 16:58:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:38.219 16:58:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:38.219 16:58:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:38.219 16:58:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:38.219 16:58:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:38.219 16:58:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:38.219 16:58:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:38.219 16:58:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:38.219 16:58:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:38.219 1+0 records in 00:07:38.219 1+0 records out 
00:07:38.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281846 s, 14.5 MB/s 00:07:38.219 16:58:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.219 16:58:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:38.219 16:58:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.219 16:58:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:38.219 16:58:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:38.219 16:58:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.219 16:58:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.219 16:58:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:38.479 /dev/nbd1 00:07:38.479 16:58:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:38.479 16:58:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:38.479 16:58:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:38.479 16:58:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:38.479 16:58:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:38.479 16:58:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:38.479 16:58:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:38.479 16:58:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:38.479 16:58:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:38.479 16:58:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:38.479 16:58:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:38.479 1+0 records in 00:07:38.479 1+0 records out 00:07:38.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430301 s, 9.5 MB/s 00:07:38.479 16:58:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.479 16:58:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:38.479 16:58:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.479 16:58:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:38.479 16:58:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:38.479 16:58:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.479 16:58:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.479 16:58:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:38.479 16:58:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.479 16:58:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:38.738 16:58:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:38.738 { 00:07:38.738 "nbd_device": "/dev/nbd0", 00:07:38.738 "bdev_name": "Malloc0" 00:07:38.738 }, 00:07:38.738 { 00:07:38.738 "nbd_device": "/dev/nbd1", 00:07:38.738 "bdev_name": "Malloc1" 00:07:38.738 } 
00:07:38.738 ]' 00:07:38.738 16:58:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:38.738 { 00:07:38.738 "nbd_device": "/dev/nbd0", 00:07:38.738 "bdev_name": "Malloc0" 00:07:38.738 }, 00:07:38.738 { 00:07:38.738 "nbd_device": "/dev/nbd1", 00:07:38.738 "bdev_name": "Malloc1" 00:07:38.738 } 00:07:38.738 ]' 00:07:38.738 16:58:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:38.738 16:58:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:38.738 /dev/nbd1' 00:07:38.738 16:58:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.738 16:58:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:38.738 /dev/nbd1' 00:07:38.738 16:58:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:38.738 16:58:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:38.738 16:58:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:38.739 16:58:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:38.739 16:58:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:38.739 16:58:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.739 16:58:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:38.739 16:58:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:38.739 16:58:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:38.739 16:58:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:38.739 16:58:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:38.739 256+0 records in 00:07:38.739 256+0 records out 00:07:38.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131262 s, 79.9 MB/s 00:07:38.739 16:58:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:38.739 16:58:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:38.998 256+0 records in 00:07:38.998 256+0 records out 00:07:38.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243947 s, 43.0 MB/s 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:38.998 256+0 records in 00:07:38.998 256+0 records out 00:07:38.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253308 s, 41.4 MB/s 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:38.998 16:58:31 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.998 16:58:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.258 16:58:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:39.517 16:58:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:39.517 16:58:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:39.517 16:58:31 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:39.517 16:58:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:39.517 16:58:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:39.517 16:58:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.517 16:58:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:39.517 16:58:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:39.517 16:58:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:39.517 16:58:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:39.517 16:58:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:39.517 16:58:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:39.517 16:58:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:39.776 16:58:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:40.036 [2024-07-25 16:58:32.358423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:40.036 [2024-07-25 16:58:32.427225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.036 [2024-07-25 16:58:32.427228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.036 [2024-07-25 16:58:32.469000] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:40.036 [2024-07-25 16:58:32.469049] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:43.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:43.328 16:58:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60201 /var/tmp/spdk-nbd.sock 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60201 ']' 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:43.328 16:58:35 event.app_repeat -- event/event.sh@39 -- # killprocess 60201 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 60201 ']' 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 60201 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60201 00:07:43.328 killing process with pid 60201 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60201' 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@969 -- # kill 60201 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@974 -- # wait 60201 00:07:43.328 spdk_app_start is called in Round 0. 00:07:43.328 Shutdown signal received, stop current app iteration 00:07:43.328 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:43.328 spdk_app_start is called in Round 1. 00:07:43.328 Shutdown signal received, stop current app iteration 00:07:43.328 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:43.328 spdk_app_start is called in Round 2. 00:07:43.328 Shutdown signal received, stop current app iteration 00:07:43.328 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:43.328 spdk_app_start is called in Round 3. 00:07:43.328 Shutdown signal received, stop current app iteration 00:07:43.328 16:58:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:43.328 16:58:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:43.328 ************************************ 00:07:43.328 END TEST app_repeat 00:07:43.328 ************************************ 00:07:43.328 00:07:43.328 real 0m17.458s 00:07:43.328 user 0m37.972s 00:07:43.328 sys 0m3.029s 00:07:43.328 16:58:35 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.329 16:58:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:43.329 16:58:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:43.329 16:58:35 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:43.329 16:58:35 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.329 16:58:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.329 16:58:35 event -- common/autotest_common.sh@10 -- # set +x 00:07:43.329 ************************************ 00:07:43.329 START TEST cpu_locks 00:07:43.329 ************************************ 00:07:43.329 16:58:35 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:43.599 * Looking for test storage... 
00:07:43.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:43.599 16:58:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:43.599 16:58:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:43.599 16:58:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:43.599 16:58:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:43.599 16:58:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.599 16:58:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.599 16:58:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.599 ************************************ 00:07:43.599 START TEST default_locks 00:07:43.599 ************************************ 00:07:43.599 16:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:43.599 16:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60612 00:07:43.599 16:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:43.599 16:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60612 00:07:43.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.599 16:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60612 ']' 00:07:43.599 16:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.599 16:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.599 16:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.599 16:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.599 16:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.600 [2024-07-25 16:58:35.900388] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:43.600 [2024-07-25 16:58:35.900461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60612 ] 00:07:43.600 [2024-07-25 16:58:36.042739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.859 [2024-07-25 16:58:36.138735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.428 16:58:36 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.428 16:58:36 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:44.428 16:58:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60612 00:07:44.428 16:58:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:44.428 16:58:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60612 00:07:44.995 16:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60612 00:07:44.995 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60612 ']' 00:07:44.995 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60612 00:07:44.995 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:44.995 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.995 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60612 00:07:44.995 killing process with pid 60612 00:07:44.995 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.995 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.995 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60612' 00:07:44.995 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60612 00:07:44.995 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60612 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60612 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60612 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60612 00:07:45.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:45.255 ERROR: process (pid: 60612) is no longer running 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60612 ']' 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.255 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60612) - No such process 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:45.255 00:07:45.255 real 0m1.783s 00:07:45.255 user 0m1.833s 00:07:45.255 sys 0m0.592s 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.255 ************************************ 00:07:45.255 END TEST default_locks 00:07:45.255 ************************************ 00:07:45.255 16:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.255 16:58:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:45.255 16:58:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:45.255 16:58:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.255 16:58:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.255 ************************************ 00:07:45.255 START TEST default_locks_via_rpc 00:07:45.255 ************************************ 00:07:45.255 16:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:45.255 16:58:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60664 00:07:45.255 16:58:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:45.255 16:58:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60664 00:07:45.255 16:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60664 ']' 00:07:45.255 16:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 
-- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.255 16:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.255 16:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.255 16:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.255 16:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.513 [2024-07-25 16:58:37.748242] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:45.513 [2024-07-25 16:58:37.748311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60664 ] 00:07:45.513 [2024-07-25 16:58:37.887363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.513 [2024-07-25 16:58:37.975977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60664 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60664 00:07:46.450 16:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:46.709 16:58:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60664 00:07:46.709 16:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60664 ']' 00:07:46.709 16:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60664 00:07:46.709 16:58:39 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:46.709 16:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.709 16:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60664 00:07:46.709 killing process with pid 60664 00:07:46.709 16:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.709 16:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.709 16:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60664' 00:07:46.709 16:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60664 00:07:46.709 16:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60664 00:07:47.277 00:07:47.277 real 0m1.787s 00:07:47.277 user 0m1.878s 00:07:47.277 sys 0m0.552s 00:07:47.277 16:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.277 16:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.277 ************************************ 00:07:47.277 END TEST default_locks_via_rpc 00:07:47.277 ************************************ 00:07:47.277 16:58:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:47.277 16:58:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.277 16:58:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.277 16:58:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.277 ************************************ 00:07:47.277 START TEST non_locking_app_on_locked_coremask 00:07:47.277 ************************************ 00:07:47.277 16:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:47.277 16:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60715 00:07:47.277 16:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60715 /var/tmp/spdk.sock 00:07:47.277 16:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:47.277 16:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60715 ']' 00:07:47.277 16:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.277 16:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.277 16:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:47.277 16:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.277 16:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.277 [2024-07-25 16:58:39.615454] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:47.277 [2024-07-25 16:58:39.615546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60715 ] 00:07:47.535 [2024-07-25 16:58:39.758628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.535 [2024-07-25 16:58:39.844197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.119 16:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.119 16:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:48.119 16:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60725 00:07:48.119 16:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:48.119 16:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60725 /var/tmp/spdk2.sock 00:07:48.119 16:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60725 ']' 00:07:48.119 16:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:48.119 16:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:48.119 16:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:48.119 16:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.119 16:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.119 [2024-07-25 16:58:40.513937] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:48.119 [2024-07-25 16:58:40.514006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60725 ] 00:07:48.400 [2024-07-25 16:58:40.651709] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:48.400 [2024-07-25 16:58:40.651759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.400 [2024-07-25 16:58:40.842097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.972 16:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.972 16:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:48.972 16:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60715 00:07:48.972 16:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:48.972 16:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60715 00:07:49.908 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60715 00:07:49.908 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60715 ']' 00:07:49.908 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60715 00:07:49.908 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:49.908 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.908 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60715 00:07:49.908 killing process with pid 60715 00:07:49.908 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.908 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.908 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60715' 00:07:49.908 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60715 00:07:49.908 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60715 00:07:50.476 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60725 00:07:50.476 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60725 ']' 00:07:50.476 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60725 00:07:50.476 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:50.476 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.476 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60725 00:07:50.476 killing process with pid 60725 00:07:50.476 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:50.476 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:50.476 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60725' 00:07:50.476 16:58:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60725 00:07:50.476 16:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60725 00:07:50.735 00:07:50.735 real 0m3.645s 00:07:50.735 user 0m3.948s 00:07:50.735 sys 0m1.049s 00:07:50.735 16:58:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.735 16:58:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.735 ************************************ 00:07:50.735 END TEST non_locking_app_on_locked_coremask 00:07:50.735 ************************************ 00:07:50.995 16:58:43 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:50.995 16:58:43 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.995 16:58:43 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.995 16:58:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.995 ************************************ 00:07:50.995 START TEST locking_app_on_unlocked_coremask 00:07:50.995 ************************************ 00:07:50.995 16:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:50.995 16:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60787 00:07:50.995 16:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:50.995 16:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60787 /var/tmp/spdk.sock 00:07:50.995 16:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60787 ']' 00:07:50.995 16:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.995 16:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.995 16:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.995 16:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.995 16:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.995 [2024-07-25 16:58:43.326011] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:50.995 [2024-07-25 16:58:43.326607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60787 ] 00:07:50.995 [2024-07-25 16:58:43.454538] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:50.995 [2024-07-25 16:58:43.454577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.254 [2024-07-25 16:58:43.539280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.822 16:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.822 16:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:51.822 16:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:51.822 16:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60803 00:07:51.822 16:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60803 /var/tmp/spdk2.sock 00:07:51.822 16:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60803 ']' 00:07:51.822 16:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.822 16:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.822 16:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:51.822 16:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.822 16:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.822 [2024-07-25 16:58:44.245323] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:51.822 [2024-07-25 16:58:44.245805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60803 ] 00:07:52.080 [2024-07-25 16:58:44.380744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.339 [2024-07-25 16:58:44.578996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.907 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.907 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:52.907 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60803 00:07:52.907 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60803 00:07:52.907 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:53.848 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60787 00:07:53.848 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60787 ']' 00:07:53.848 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60787 00:07:53.848 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:53.848 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.848 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60787 00:07:53.848 killing process with pid 60787 00:07:53.848 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.848 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.848 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60787' 00:07:53.848 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60787 00:07:53.848 16:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60787 00:07:54.417 16:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60803 00:07:54.417 16:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60803 ']' 00:07:54.417 16:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60803 00:07:54.417 16:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:54.417 16:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.417 16:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60803 00:07:54.417 16:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:54.418 16:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:54.418 16:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60803' 00:07:54.418 killing process with pid 60803 00:07:54.418 16:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60803 00:07:54.418 16:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60803 00:07:54.675 00:07:54.675 real 0m3.719s 00:07:54.675 user 0m4.071s 00:07:54.675 sys 0m1.022s 00:07:54.675 16:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.675 ************************************ 00:07:54.675 END TEST locking_app_on_unlocked_coremask 00:07:54.675 ************************************ 00:07:54.675 16:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.675 16:58:47 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:54.675 16:58:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.675 16:58:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.675 16:58:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.675 ************************************ 00:07:54.675 START TEST locking_app_on_locked_coremask 00:07:54.675 ************************************ 00:07:54.675 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:54.676 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60870 00:07:54.676 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60870 /var/tmp/spdk.sock 00:07:54.676 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:54.676 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60870 ']' 00:07:54.676 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.676 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.676 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.676 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.676 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.676 [2024-07-25 16:58:47.123141] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:54.676 [2024-07-25 16:58:47.123230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60870 ] 00:07:54.934 [2024-07-25 16:58:47.264495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.934 [2024-07-25 16:58:47.342408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60886 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60886 /var/tmp/spdk2.sock 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60886 /var/tmp/spdk2.sock 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60886 /var/tmp/spdk2.sock 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60886 ']' 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.501 16:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.759 [2024-07-25 16:58:48.003946] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:55.759 [2024-07-25 16:58:48.004316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60886 ] 00:07:55.759 [2024-07-25 16:58:48.140536] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60870 has claimed it. 00:07:55.759 [2024-07-25 16:58:48.140588] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:56.328 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60886) - No such process 00:07:56.328 ERROR: process (pid: 60886) is no longer running 00:07:56.328 16:58:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.328 16:58:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:56.328 16:58:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:56.328 16:58:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:56.328 16:58:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:56.328 16:58:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:56.328 16:58:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60870 00:07:56.328 16:58:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60870 00:07:56.328 16:58:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:56.897 16:58:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60870 00:07:56.897 16:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60870 ']' 00:07:56.897 16:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60870 00:07:56.897 16:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:56.897 16:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.897 16:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60870 00:07:56.897 killing process with pid 60870 00:07:56.897 16:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.897 16:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.897 16:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60870' 00:07:56.897 16:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60870 00:07:56.897 16:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60870 00:07:57.156 00:07:57.156 real 0m2.394s 00:07:57.156 user 0m2.650s 00:07:57.156 sys 0m0.609s 00:07:57.156 16:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.156 16:58:49 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:57.156 ************************************ 00:07:57.156 END TEST locking_app_on_locked_coremask 00:07:57.156 ************************************ 00:07:57.156 16:58:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:57.156 16:58:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.156 16:58:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.156 16:58:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.156 ************************************ 00:07:57.156 START TEST locking_overlapped_coremask 00:07:57.156 ************************************ 00:07:57.156 16:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:57.156 16:58:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60926 00:07:57.156 16:58:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:57.156 16:58:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60926 /var/tmp/spdk.sock 00:07:57.156 16:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60926 ']' 00:07:57.156 16:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.156 16:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.156 16:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.156 16:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.156 16:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.156 [2024-07-25 16:58:49.587928] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:57.156 [2024-07-25 16:58:49.588008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60926 ] 00:07:57.416 [2024-07-25 16:58:49.759175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.416 [2024-07-25 16:58:49.866368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.416 [2024-07-25 16:58:49.866470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.416 [2024-07-25 16:58:49.866472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60944 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60944 /var/tmp/spdk2.sock 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60944 /var/tmp/spdk2.sock 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60944 /var/tmp/spdk2.sock 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60944 ']' 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:57.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.983 16:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.242 [2024-07-25 16:58:50.465269] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
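Why the second target is expected to fail: -m takes a hexadecimal coremask in which bit n selects core n, so -m 0x7 claims cores 0-2 and -m 0x1c claims cores 2-4, overlapping on core 2 — exactly the core the claim error below reports. Decoding the masks by hand (illustrative shell, not part of the test):

    for mask in 0x7 0x1c; do
        printf '%-4s -> cores:' "$mask"
        for n in {0..7}; do (( (mask >> n) & 1 )) && printf ' %d' "$n"; done
        echo
    done
    # 0x7 = 0b00111 -> 0 1 2 ; 0x1c = 0b11100 -> 2 3 4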
00:07:58.242 [2024-07-25 16:58:50.465498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60944 ] 00:07:58.242 [2024-07-25 16:58:50.605656] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60926 has claimed it. 00:07:58.242 [2024-07-25 16:58:50.605706] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:58.808 ERROR: process (pid: 60944) is no longer running 00:07:58.808 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60944) - No such process 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60926 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60926 ']' 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60926 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60926 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60926' 00:07:58.808 killing process with pid 60926 00:07:58.808 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60926 00:07:58.808 16:58:51 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60926 00:07:59.118 00:07:59.118 real 0m1.947s 00:07:59.118 user 0m5.129s 00:07:59.118 sys 0m0.369s 00:07:59.118 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.118 16:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.118 ************************************ 00:07:59.118 END TEST locking_overlapped_coremask 00:07:59.118 ************************************ 00:07:59.118 16:58:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:59.118 16:58:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.118 16:58:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.118 16:58:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.118 ************************************ 00:07:59.118 START TEST locking_overlapped_coremask_via_rpc 00:07:59.118 ************************************ 00:07:59.118 16:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:59.118 16:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60984 00:07:59.118 16:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60984 /var/tmp/spdk.sock 00:07:59.118 16:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:59.118 16:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60984 ']' 00:07:59.118 16:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.118 16:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.118 16:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.118 16:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.118 16:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.377 [2024-07-25 16:58:51.615948] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:59.377 [2024-07-25 16:58:51.616030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60984 ] 00:07:59.377 [2024-07-25 16:58:51.756235] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:59.377 [2024-07-25 16:58:51.756296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.377 [2024-07-25 16:58:51.837245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.377 [2024-07-25 16:58:51.837328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.377 [2024-07-25 16:58:51.837329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.314 16:58:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.314 16:58:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:00.314 16:58:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61002 00:08:00.314 16:58:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61002 /var/tmp/spdk2.sock 00:08:00.314 16:58:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:00.314 16:58:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61002 ']' 00:08:00.314 16:58:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:00.314 16:58:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.314 16:58:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:00.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:00.314 16:58:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.314 16:58:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.314 [2024-07-25 16:58:52.514374] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:00.314 [2024-07-25 16:58:52.514650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61002 ] 00:08:00.314 [2024-07-25 16:58:52.650190] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:00.314 [2024-07-25 16:58:52.650241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.572 [2024-07-25 16:58:52.835595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.572 [2024-07-25 16:58:52.841026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.572 [2024-07-25 16:58:52.841030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.140 [2024-07-25 16:58:53.383185] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60984 has claimed it. 
00:08:01.140 request: 00:08:01.140 { 00:08:01.140 "method": "framework_enable_cpumask_locks", 00:08:01.140 "req_id": 1 00:08:01.140 } 00:08:01.140 Got JSON-RPC error response 00:08:01.140 response: 00:08:01.140 { 00:08:01.140 "code": -32603, 00:08:01.140 "message": "Failed to claim CPU core: 2" 00:08:01.140 } 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60984 /var/tmp/spdk.sock 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60984 ']' 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61002 /var/tmp/spdk2.sock 00:08:01.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61002 ']' 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.140 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.141 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
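The JSON-RPC exchange above is the point of the --disable-cpumask-locks variant: both targets start with locking off, the first enables it over RPC and claims cores 0-2, and the second's attempt then fails with -32603 because core 2 is already held. rpc_cmd in the traces is a thin wrapper over scripts/rpc.py, so replaying the two calls by hand would look roughly like this (socket paths as in the log):

    # first instance: enables locking and claims its 0x7 mask (cores 0-2)
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # second instance overlaps on core 2, so this is the expected failure
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # -> "Failed to claim CPU core: 2"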
00:08:01.141 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.141 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.400 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.400 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:01.400 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:01.400 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:01.400 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:01.400 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:01.400 ************************************ 00:08:01.400 END TEST locking_overlapped_coremask_via_rpc 00:08:01.400 ************************************ 00:08:01.400 00:08:01.400 real 0m2.270s 00:08:01.400 user 0m0.983s 00:08:01.400 sys 0m0.210s 00:08:01.400 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.400 16:58:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.659 16:58:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:01.659 16:58:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60984 ]] 00:08:01.659 16:58:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60984 00:08:01.659 16:58:53 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60984 ']' 00:08:01.659 16:58:53 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60984 00:08:01.659 16:58:53 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:01.659 16:58:53 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.659 16:58:53 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60984 00:08:01.659 16:58:53 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:01.659 killing process with pid 60984 00:08:01.659 16:58:53 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:01.659 16:58:53 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60984' 00:08:01.659 16:58:53 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60984 00:08:01.659 16:58:53 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60984 00:08:01.917 16:58:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61002 ]] 00:08:01.917 16:58:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61002 00:08:01.917 16:58:54 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61002 ']' 00:08:01.917 16:58:54 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61002 00:08:01.917 16:58:54 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:01.917 16:58:54 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.917 
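The long backslash-escaped comparison above is just xtrace's rendering of a quoted [[ == ]] pattern; what check_remaining_locks actually verifies is that the lock files left on disk match exactly the set expected for the claimed cores. Unescaped, the check amounts to:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, matching the 0x7 mask
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]       # pass only on an exact match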
16:58:54 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61002 00:08:01.917 killing process with pid 61002 00:08:01.917 16:58:54 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:01.917 16:58:54 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:01.917 16:58:54 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61002' 00:08:01.917 16:58:54 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61002 00:08:01.917 16:58:54 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61002 00:08:02.175 16:58:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:02.175 Process with pid 60984 is not found 00:08:02.175 Process with pid 61002 is not found 00:08:02.175 16:58:54 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:02.175 16:58:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60984 ]] 00:08:02.175 16:58:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60984 00:08:02.175 16:58:54 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60984 ']' 00:08:02.175 16:58:54 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60984 00:08:02.175 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60984) - No such process 00:08:02.175 16:58:54 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60984 is not found' 00:08:02.175 16:58:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61002 ]] 00:08:02.175 16:58:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61002 00:08:02.175 16:58:54 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61002 ']' 00:08:02.175 16:58:54 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61002 00:08:02.175 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61002) - No such process 00:08:02.175 16:58:54 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61002 is not found' 00:08:02.175 16:58:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:02.175 00:08:02.175 real 0m18.929s 00:08:02.175 user 0m31.305s 00:08:02.175 sys 0m5.327s 00:08:02.175 16:58:54 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.175 16:58:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:02.175 ************************************ 00:08:02.175 END TEST cpu_locks 00:08:02.175 ************************************ 00:08:02.434 00:08:02.434 real 0m45.372s 00:08:02.434 user 1m23.969s 00:08:02.434 sys 0m9.272s 00:08:02.434 16:58:54 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.434 16:58:54 event -- common/autotest_common.sh@10 -- # set +x 00:08:02.434 ************************************ 00:08:02.434 END TEST event 00:08:02.434 ************************************ 00:08:02.434 16:58:54 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:02.434 16:58:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.434 16:58:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.434 16:58:54 -- common/autotest_common.sh@10 -- # set +x 00:08:02.434 ************************************ 00:08:02.434 START TEST thread 00:08:02.434 ************************************ 00:08:02.434 16:58:54 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:02.434 * Looking for test storage... 
00:08:02.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:02.434 16:58:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:02.434 16:58:54 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:02.434 16:58:54 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.434 16:58:54 thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.434 ************************************ 00:08:02.434 START TEST thread_poller_perf 00:08:02.434 ************************************ 00:08:02.434 16:58:54 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:02.692 [2024-07-25 16:58:54.908858] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:02.693 [2024-07-25 16:58:54.908965] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61119 ] 00:08:02.693 [2024-07-25 16:58:55.053974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.693 [2024-07-25 16:58:55.128454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.693 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:04.072 ====================================== 00:08:04.072 busy:2495581700 (cyc) 00:08:04.072 total_run_count: 407000 00:08:04.072 tsc_hz: 2490000000 (cyc) 00:08:04.072 ====================================== 00:08:04.072 poller_cost: 6131 (cyc), 2462 (nsec) 00:08:04.072 00:08:04.072 real 0m1.321s 00:08:04.072 user 0m1.160s 00:08:04.072 sys 0m0.055s 00:08:04.072 16:58:56 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.072 16:58:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:04.072 ************************************ 00:08:04.072 END TEST thread_poller_perf 00:08:04.072 ************************************ 00:08:04.072 16:58:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:04.072 16:58:56 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:04.072 16:58:56 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.072 16:58:56 thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.072 ************************************ 00:08:04.072 START TEST thread_poller_perf 00:08:04.072 ************************************ 00:08:04.072 16:58:56 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:04.072 [2024-07-25 16:58:56.289555] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:04.072 [2024-07-25 16:58:56.289669] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61160 ] 00:08:04.072 [2024-07-25 16:58:56.433750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.072 Running 1000 pollers for 1 seconds with 0 microseconds period. 
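poller_cost in the summary above is plain arithmetic over the counters printed with it: busy TSC cycles divided by run count, then converted to nanoseconds with the TSC rate. Checking the first run's figures by hand (the 0-period run that follows is computed the same way):

    busy=2495581700 runs=407000 tsc_hz=2490000000
    echo "cyc:  $(( busy / runs ))"                        # 6131 cycles per poller invocation
    echo "nsec: $(( busy * 1000000000 / tsc_hz / runs ))"  # 2462 ns at 2.49 GHz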
00:08:04.072 [2024-07-25 16:58:56.514163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.448 ====================================== 00:08:05.448 busy:2491944130 (cyc) 00:08:05.448 total_run_count: 5241000 00:08:05.448 tsc_hz: 2490000000 (cyc) 00:08:05.448 ====================================== 00:08:05.448 poller_cost: 475 (cyc), 190 (nsec) 00:08:05.448 ************************************ 00:08:05.448 END TEST thread_poller_perf 00:08:05.448 ************************************ 00:08:05.448 00:08:05.448 real 0m1.320s 00:08:05.448 user 0m1.159s 00:08:05.448 sys 0m0.054s 00:08:05.448 16:58:57 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.448 16:58:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 16:58:57 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:05.448 00:08:05.448 real 0m2.895s 00:08:05.448 user 0m2.412s 00:08:05.448 sys 0m0.275s 00:08:05.448 ************************************ 00:08:05.448 END TEST thread 00:08:05.448 ************************************ 00:08:05.448 16:58:57 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.448 16:58:57 thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 16:58:57 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:08:05.448 16:58:57 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:05.448 16:58:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.448 16:58:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.448 16:58:57 -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 ************************************ 00:08:05.448 START TEST app_cmdline 00:08:05.448 ************************************ 00:08:05.448 16:58:57 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:05.448 * Looking for test storage... 00:08:05.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:05.448 16:58:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:05.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.448 16:58:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61229 00:08:05.448 16:58:57 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:05.448 16:58:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61229 00:08:05.448 16:58:57 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61229 ']' 00:08:05.448 16:58:57 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.448 16:58:57 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.448 16:58:57 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.448 16:58:57 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.448 16:58:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 [2024-07-25 16:58:57.891236] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:05.448 [2024-07-25 16:58:57.891313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61229 ] 00:08:05.706 [2024-07-25 16:58:58.030730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.706 [2024-07-25 16:58:58.113057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.273 16:58:58 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.273 16:58:58 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:06.273 16:58:58 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:06.533 { 00:08:06.533 "version": "SPDK v24.09-pre git sha1 704257090", 00:08:06.533 "fields": { 00:08:06.533 "major": 24, 00:08:06.533 "minor": 9, 00:08:06.533 "patch": 0, 00:08:06.533 "suffix": "-pre", 00:08:06.533 "commit": "704257090" 00:08:06.533 } 00:08:06.533 } 00:08:06.533 16:58:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:06.533 16:58:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:06.533 16:58:58 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:06.533 16:58:58 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:06.533 16:58:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:06.533 16:58:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:06.533 16:58:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:06.533 16:58:58 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.533 16:58:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:06.533 16:58:58 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.533 16:58:58 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:06.533 16:58:58 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:06.533 16:58:58 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:06.533 16:58:58 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:06.533 16:58:58 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:06.533 16:58:58 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:06.533 16:58:58 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.533 16:58:58 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:06.533 16:58:58 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.533 16:58:58 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:06.533 16:58:58 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.533 16:58:58 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:06.533 16:58:58 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:06.533 16:58:58 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:06.792 request: 00:08:06.792 { 00:08:06.792 "method": "env_dpdk_get_mem_stats", 00:08:06.792 "req_id": 1 00:08:06.792 } 00:08:06.792 Got JSON-RPC error response 00:08:06.792 response: 00:08:06.792 { 00:08:06.792 "code": -32601, 00:08:06.792 "message": "Method not found" 00:08:06.792 } 00:08:06.792 16:58:59 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:06.792 16:58:59 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:06.792 16:58:59 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:06.792 16:58:59 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:06.792 16:58:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61229 00:08:06.792 16:58:59 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61229 ']' 00:08:06.792 16:58:59 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61229 00:08:06.792 16:58:59 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:06.792 16:58:59 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.792 16:58:59 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61229 00:08:06.792 killing process with pid 61229 00:08:06.792 16:58:59 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:06.792 16:58:59 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.792 16:58:59 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61229' 00:08:06.792 16:58:59 app_cmdline -- common/autotest_common.sh@969 -- # kill 61229 00:08:06.792 16:58:59 app_cmdline -- common/autotest_common.sh@974 -- # wait 61229 00:08:07.051 00:08:07.051 real 0m1.771s 00:08:07.051 user 0m2.043s 00:08:07.051 sys 0m0.448s 00:08:07.051 ************************************ 00:08:07.051 END TEST app_cmdline 00:08:07.051 ************************************ 00:08:07.051 16:58:59 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.051 16:58:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:07.310 16:58:59 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:07.310 16:58:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.310 16:58:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.310 16:58:59 -- common/autotest_common.sh@10 -- # set +x 00:08:07.310 ************************************ 00:08:07.310 START TEST version 00:08:07.310 ************************************ 00:08:07.310 16:58:59 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:07.310 * Looking for test storage... 
00:08:07.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:07.310 16:58:59 version -- app/version.sh@17 -- # get_header_version major 00:08:07.310 16:58:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:07.310 16:58:59 version -- app/version.sh@14 -- # cut -f2 00:08:07.310 16:58:59 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.310 16:58:59 version -- app/version.sh@17 -- # major=24 00:08:07.310 16:58:59 version -- app/version.sh@18 -- # get_header_version minor 00:08:07.310 16:58:59 version -- app/version.sh@14 -- # cut -f2 00:08:07.310 16:58:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:07.310 16:58:59 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.310 16:58:59 version -- app/version.sh@18 -- # minor=9 00:08:07.310 16:58:59 version -- app/version.sh@19 -- # get_header_version patch 00:08:07.310 16:58:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:07.310 16:58:59 version -- app/version.sh@14 -- # cut -f2 00:08:07.310 16:58:59 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.310 16:58:59 version -- app/version.sh@19 -- # patch=0 00:08:07.310 16:58:59 version -- app/version.sh@20 -- # get_header_version suffix 00:08:07.310 16:58:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:07.310 16:58:59 version -- app/version.sh@14 -- # cut -f2 00:08:07.310 16:58:59 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.310 16:58:59 version -- app/version.sh@20 -- # suffix=-pre 00:08:07.310 16:58:59 version -- app/version.sh@22 -- # version=24.9 00:08:07.310 16:58:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:07.310 16:58:59 version -- app/version.sh@28 -- # version=24.9rc0 00:08:07.310 16:58:59 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:07.311 16:58:59 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:07.311 16:58:59 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:07.311 16:58:59 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:07.311 00:08:07.311 real 0m0.210s 00:08:07.311 user 0m0.106s 00:08:07.311 sys 0m0.155s 00:08:07.311 ************************************ 00:08:07.311 END TEST version 00:08:07.311 ************************************ 00:08:07.311 16:58:59 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.311 16:58:59 version -- common/autotest_common.sh@10 -- # set +x 00:08:07.570 16:58:59 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:08:07.570 16:58:59 -- spdk/autotest.sh@202 -- # uname -s 00:08:07.570 16:58:59 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:08:07.570 16:58:59 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:07.570 16:58:59 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:07.570 16:58:59 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:08:07.570 16:58:59 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:07.570 16:58:59 -- spdk/autotest.sh@264 -- # timing_exit lib 00:08:07.570 16:58:59 -- common/autotest_common.sh@730 -- # xtrace_disable 
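The version test above never talks to a running target; each component is scraped from include/spdk/version.h with the grep | cut | tr pipelines just traced, and the assembled string is then compared against the Python package. The major-version extraction, runnable as-is from the repo root:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'
    # -> 24; MINOR, PATCH and SUFFIX are pulled the same way, yielding 24.9rc0,
    #    which must equal python3 -c 'import spdk; print(spdk.__version__)'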
00:08:07.570 16:58:59 -- common/autotest_common.sh@10 -- # set +x 00:08:07.570 16:58:59 -- spdk/autotest.sh@266 -- # '[' 1 -eq 1 ']' 00:08:07.570 16:58:59 -- spdk/autotest.sh@267 -- # run_test iscsi_tgt /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:08:07.570 16:58:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.570 16:58:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.570 16:58:59 -- common/autotest_common.sh@10 -- # set +x 00:08:07.570 ************************************ 00:08:07.570 START TEST iscsi_tgt 00:08:07.570 ************************************ 00:08:07.570 16:58:59 iscsi_tgt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:08:07.570 * Looking for test storage... 00:08:07.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # uname -s 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:07.570 16:59:00 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@18 -- # iscsicleanup 00:08:07.570 Cleaning up iSCSI connection 00:08:07.570 16:59:00 iscsi_tgt -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:08:07.570 16:59:00 iscsi_tgt -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:08:07.829 iscsiadm: No matching sessions found 00:08:07.829 16:59:00 iscsi_tgt -- common/autotest_common.sh@983 -- # true 00:08:07.829 16:59:00 iscsi_tgt -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:08:07.829 iscsiadm: No records found 00:08:07.829 16:59:00 iscsi_tgt -- common/autotest_common.sh@984 -- # true 00:08:07.829 16:59:00 iscsi_tgt -- 
common/autotest_common.sh@985 -- # rm -rf 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@21 -- # create_veth_interfaces 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # ip link set init_br nomaster 00:08:07.829 Cannot find device "init_br" 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # true 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # ip link set tgt_br nomaster 00:08:07.829 Cannot find device "tgt_br" 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # true 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # ip link set tgt_br2 nomaster 00:08:07.829 Cannot find device "tgt_br2" 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # true 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # ip link set init_br down 00:08:07.829 Cannot find device "init_br" 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # true 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # ip link set tgt_br down 00:08:07.829 Cannot find device "tgt_br" 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # true 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # ip link set tgt_br2 down 00:08:07.829 Cannot find device "tgt_br2" 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # true 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # ip link delete iscsi_br type bridge 00:08:07.829 Cannot find device "iscsi_br" 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # true 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # ip link delete spdk_init_int 00:08:07.829 Cannot find device "spdk_init_int" 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # true 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:08:07.829 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # true 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:08:07.829 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # true 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # ip netns del spdk_iscsi_ns 00:08:07.829 Cannot remove namespace file "/var/run/netns/spdk_iscsi_ns": No such file or directory 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # true 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@44 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@47 -- # ip netns add spdk_iscsi_ns 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@50 -- # ip link add spdk_init_int type veth peer name init_br 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@51 -- # ip link add spdk_tgt_int type veth peer name tgt_br 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@52 -- # ip link add spdk_tgt_int2 type veth peer name tgt_br2 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@55 -- # ip link set spdk_tgt_int netns spdk_iscsi_ns 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@56 -- # ip link set spdk_tgt_int2 netns spdk_iscsi_ns 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@59 -- # ip addr add 10.0.0.2/24 dev spdk_init_int 00:08:07.829 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@60 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.1/24 dev 
spdk_tgt_int 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@61 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.3/24 dev spdk_tgt_int2 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@64 -- # ip link set spdk_init_int up 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@65 -- # ip link set init_br up 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@66 -- # ip link set tgt_br up 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@67 -- # ip link set tgt_br2 up 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@68 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int up 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@69 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int2 up 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@70 -- # ip netns exec spdk_iscsi_ns ip link set lo up 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@73 -- # ip link add iscsi_br type bridge 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@74 -- # ip link set iscsi_br up 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@77 -- # ip link set init_br master iscsi_br 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@78 -- # ip link set tgt_br master iscsi_br 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@79 -- # ip link set tgt_br2 master iscsi_br 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@82 -- # iptables -I INPUT 1 -i spdk_init_int -p tcp --dport 3260 -j ACCEPT 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@83 -- # iptables -A FORWARD -i iscsi_br -o iscsi_br -j ACCEPT 00:08:08.088 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@86 -- # ping -c 1 10.0.0.1 00:08:08.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:08:08.088 00:08:08.089 --- 10.0.0.1 ping statistics --- 00:08:08.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.089 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:08:08.089 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@87 -- # ping -c 1 10.0.0.3 00:08:08.089 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:08.089 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:08:08.089 00:08:08.089 --- 10.0.0.3 ping statistics --- 00:08:08.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.089 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:08:08.089 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@88 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:08:08.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:08:08.089 00:08:08.089 --- 10.0.0.2 ping statistics --- 00:08:08.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.089 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:08.089 16:59:00 iscsi_tgt -- iscsi_tgt/common.sh@89 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:08:08.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:08.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:08:08.089 00:08:08.089 --- 10.0.0.2 ping statistics --- 00:08:08.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.089 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:08.089 16:59:00 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@23 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:08:08.089 16:59:00 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@25 -- # run_test iscsi_tgt_sock /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:08:08.089 16:59:00 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.089 16:59:00 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.089 16:59:00 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:08:08.089 ************************************ 00:08:08.089 START TEST iscsi_tgt_sock 00:08:08.089 ************************************ 00:08:08.089 16:59:00 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:08:08.347 * Looking for test storage... 00:08:08.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:08.347 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@48 -- # iscsitestinit 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@50 -- # 
HELLO_SOCK_APP='ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/examples/hello_sock' 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@51 -- # SOCAT_APP=socat 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@52 -- # OPENSSL_APP=openssl 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@53 -- # PSK='-N ssl -E 1234567890ABCDEF -I psk.spdk.io' 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@58 -- # timing_enter sock_client 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:08:08.348 Testing client path 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@59 -- # echo 'Testing client path' 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@63 -- # server_pid=61540 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@64 -- # trap 'killprocess $server_pid;iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@62 -- # socat tcp-l:3260,fork,bind=10.0.0.2 exec:/bin/cat 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@66 -- # waitfortcp 61540 10.0.0.2:3260 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@25 -- # local addr=10.0.0.2:3260 00:08:08.348 Waiting for process to start up and listen on address 10.0.0.2:3260... 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@27 -- # echo 'Waiting for process to start up and listen on address 10.0.0.2:3260...' 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@29 -- # xtrace_disable 00:08:08.348 16:59:00 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:08:08.944 [2024-07-25 16:59:01.190041] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:08.944 [2024-07-25 16:59:01.190146] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61550 ] 00:08:08.944 [2024-07-25 16:59:01.332365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.201 [2024-07-25 16:59:01.440764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.201 [2024-07-25 16:59:01.440832] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:09.201 [2024-07-25 16:59:01.440862] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:08:09.201 [2024-07-25 16:59:01.441045] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 46782) 00:08:09.201 [2024-07-25 16:59:01.441141] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:08:10.138 [2024-07-25 16:59:02.439544] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:10.138 [2024-07-25 16:59:02.439667] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:10.138 [2024-07-25 16:59:02.550405] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
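Each client-path iteration above follows the same loop: a throwaway socat echo server is bound to the initiator address, then the hello_sock example connects to it from inside the target namespace. A minimal sketch of one iteration, assuming hello_sock's -H/-P/-N flags select host, port, and socket implementation (only the socat command line appears verbatim in this trace):

  # echo server on the host side: forks per connection, /bin/cat mirrors every byte back
  socat tcp-l:3260,fork,bind=10.0.0.2 exec:/bin/cat &
  server_pid=$!
  # client runs inside the namespace, exactly as HELLO_SOCK_APP is defined above
  ip netns exec spdk_iscsi_ns \
    /home/vagrant/spdk_repo/spdk/build/examples/hello_sock -H 10.0.0.2 -P 3260 -N posix
  kill "$server_pid"

Using exec:/bin/cat gives a zero-dependency echo peer, so each iteration exercises only the SPDK client-side socket path.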
00:08:10.138 [2024-07-25 16:59:02.550513] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61574 ] 00:08:10.396 [2024-07-25 16:59:02.692957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.396 [2024-07-25 16:59:02.794668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.396 [2024-07-25 16:59:02.794732] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:10.396 [2024-07-25 16:59:02.794753] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:08:10.396 [2024-07-25 16:59:02.794889] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 46784) 00:08:10.396 [2024-07-25 16:59:02.794940] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:08:11.331 [2024-07-25 16:59:03.793336] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:11.331 [2024-07-25 16:59:03.793495] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:11.589 [2024-07-25 16:59:03.900550] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:11.589 [2024-07-25 16:59:03.900629] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61593 ] 00:08:11.589 [2024-07-25 16:59:04.042848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.847 [2024-07-25 16:59:04.148457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.847 [2024-07-25 16:59:04.148514] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:11.847 [2024-07-25 16:59:04.148534] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:08:11.847 [2024-07-25 16:59:04.148741] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 46794) 00:08:11.847 [2024-07-25 16:59:04.148797] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:08:12.784 [2024-07-25 16:59:05.147191] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:12.784 [2024-07-25 16:59:05.147371] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:13.087 killing process with pid 61540 00:08:13.087 Testing SSL server path 00:08:13.087 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:08:13.087 [2024-07-25 16:59:05.348843] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:13.087 [2024-07-25 16:59:05.348924] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61637 ] 00:08:13.087 [2024-07-25 16:59:05.489875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.361 [2024-07-25 16:59:05.578435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.361 [2024-07-25 16:59:05.578518] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:13.361 [2024-07-25 16:59:05.578607] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(ssl) 00:08:13.619 [2024-07-25 16:59:05.865259] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:13.619 [2024-07-25 16:59:05.865345] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61647 ] 00:08:13.619 [2024-07-25 16:59:06.006545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.876 [2024-07-25 16:59:06.109957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.876 [2024-07-25 16:59:06.110208] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:13.876 [2024-07-25 16:59:06.110374] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:08:13.876 [2024-07-25 16:59:06.112402] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 39896) 00:08:13.876 [2024-07-25 16:59:06.113042] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 39896) to (10.0.0.1, 3260) 00:08:13.876 [2024-07-25 16:59:06.113765] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:08:14.808 [2024-07-25 16:59:07.112279] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:14.808 [2024-07-25 16:59:07.112668] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:14.808 [2024-07-25 16:59:07.112768] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:14.808 [2024-07-25 16:59:07.226122] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:14.809 [2024-07-25 16:59:07.226210] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61664 ] 00:08:15.067 [2024-07-25 16:59:07.367784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.067 [2024-07-25 16:59:07.462974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.067 [2024-07-25 16:59:07.463234] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:15.067 [2024-07-25 16:59:07.463381] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:08:15.067 [2024-07-25 16:59:07.464528] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 34614) to (10.0.0.1, 3260) 00:08:15.067 [2024-07-25 16:59:07.465364] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 34614) 00:08:15.067 [2024-07-25 16:59:07.466295] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:08:16.003 [2024-07-25 16:59:08.464811] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:16.003 [2024-07-25 16:59:08.465131] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:16.003 [2024-07-25 16:59:08.465222] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:16.262 [2024-07-25 16:59:08.583180] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:16.262 [2024-07-25 16:59:08.583484] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61686 ] 00:08:16.262 [2024-07-25 16:59:08.726978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.520 [2024-07-25 16:59:08.823542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.520 [2024-07-25 16:59:08.823783] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:16.520 [2024-07-25 16:59:08.823930] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:08:16.520 [2024-07-25 16:59:08.825596] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 34622) to (10.0.0.1, 3260) 00:08:16.520 [2024-07-25 16:59:08.826212] posix.c: 755:posix_sock_create_ssl_context: *ERROR*: Incorrect TLS version provided: 7 00:08:16.520 [2024-07-25 16:59:08.826373] posix.c:1033:posix_sock_create: *ERROR*: posix_sock_create_ssl_context() failed, errno = 2 00:08:16.520 [2024-07-25 16:59:08.826537] hello_sock.c: 309:hello_sock_connect: *ERROR*: connect error(2): No such file or directory 00:08:16.520 [2024-07-25 16:59:08.826591] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:16.520 [2024-07-25 16:59:08.826698] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:16.520 [2024-07-25 16:59:08.826871] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:08:16.520 [2024-07-25 16:59:08.826936] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:16.520 [2024-07-25 16:59:08.921043] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 
initialization... 00:08:16.520 [2024-07-25 16:59:08.921145] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61695 ] 00:08:16.778 [2024-07-25 16:59:09.065042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.778 [2024-07-25 16:59:09.172823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.778 [2024-07-25 16:59:09.173066] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:16.778 [2024-07-25 16:59:09.173214] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:08:16.778 [2024-07-25 16:59:09.174495] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 34638) to (10.0.0.1, 3260) 00:08:16.778 [2024-07-25 16:59:09.175737] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 34638) 00:08:16.778 [2024-07-25 16:59:09.176764] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:08:17.712 [2024-07-25 16:59:10.175285] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:17.712 [2024-07-25 16:59:10.175623] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:17.712 [2024-07-25 16:59:10.175835] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:18.038 SSL_connect:before SSL initialization 00:08:18.038 SSL_connect:SSLv3/TLS write client hello 00:08:18.038 [2024-07-25 16:59:10.308489] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 38460) to (10.0.0.1, 3260) 00:08:18.038 SSL_connect:SSLv3/TLS write client hello 00:08:18.038 SSL_connect:SSLv3/TLS read server hello 00:08:18.038 Can't use SSL_get_servername 00:08:18.038 SSL_connect:TLSv1.3 read encrypted extensions 00:08:18.038 SSL_connect:SSLv3/TLS read finished 00:08:18.038 SSL_connect:SSLv3/TLS write change cipher spec 00:08:18.038 SSL_connect:SSLv3/TLS write finished 00:08:18.038 SSL_connect:SSL negotiation finished successfully 00:08:18.038 SSL_connect:SSL negotiation finished successfully 00:08:18.038 SSL_connect:SSLv3/TLS read server session ticket 00:08:19.938 DONE 00:08:19.938 [2024-07-25 16:59:12.267714] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:19.938 SSL3 alert write:warning:close notify 00:08:19.938 [2024-07-25 16:59:12.308567] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:19.938 [2024-07-25 16:59:12.308646] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61741 ] 00:08:20.196 [2024-07-25 16:59:12.451388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.196 [2024-07-25 16:59:12.548493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.196 [2024-07-25 16:59:12.548745] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:20.196 [2024-07-25 16:59:12.548879] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:08:20.196 [2024-07-25 16:59:12.549978] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 34654) to (10.0.0.1, 3260) 00:08:20.196 [2024-07-25 16:59:12.552481] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 34654) 00:08:20.196 [2024-07-25 16:59:12.553138] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:20.196 [2024-07-25 16:59:12.553141] hello_sock.c: 240:hello_sock_writev_poll: *ERROR*: Write to socket failed. Closing connection... 00:08:20.196 [2024-07-25 16:59:12.553348] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:08:21.130 [2024-07-25 16:59:13.551715] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:21.130 [2024-07-25 16:59:13.552043] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.130 [2024-07-25 16:59:13.552133] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:08:21.130 [2024-07-25 16:59:13.552271] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:21.389 [2024-07-25 16:59:13.654471] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:21.389 [2024-07-25 16:59:13.654570] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61761 ] 00:08:21.389 [2024-07-25 16:59:13.795990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.647 [2024-07-25 16:59:13.883512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.647 [2024-07-25 16:59:13.883784] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:21.647 [2024-07-25 16:59:13.883885] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:08:21.647 [2024-07-25 16:59:13.885020] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 34660) to (10.0.0.1, 3260) 00:08:21.647 [2024-07-25 16:59:13.885876] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 34660) 00:08:21.647 [2024-07-25 16:59:13.886385] posix.c: 586:posix_sock_psk_find_session_server_cb: *ERROR*: Unknown Client's PSK ID 00:08:21.647 [2024-07-25 16:59:13.886427] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:21.647 [2024-07-25 16:59:13.886515] hello_sock.c: 240:hello_sock_writev_poll: *ERROR*: Write to socket failed. Closing connection... 
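The PSK leg of this run has two visible outcomes. With matching credentials the TLS 1.3 handshake completes end to end (the SSL_connect:* state lines and the closing close-notify alert above), while a client that presents an identity the server has never registered is dropped in posix_sock_psk_find_session_server_cb with "Unknown Client's PSK ID", after which the writev/recv pollers fail as traced here. A rough sketch of both cases, reusing the key and identity from the PSK variable set earlier; -N/-E/-I come from that variable, the -S/-H/-P flags and the openssl invocation are assumptions, and wrong.id is a placeholder identity:

  # server: hello_sock listening with the ssl implementation and a registered PSK
  ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/examples/hello_sock \
    -S -H 10.0.0.1 -P 3260 -N ssl -E 1234567890ABCDEF -I psk.spdk.io &

  # success: same identity and key; -state prints SSL_connect transitions like those above
  openssl s_client -connect 10.0.0.1:3260 -psk 1234567890ABCDEF -psk_identity psk.spdk.io -state

  # failure: an unknown identity is rejected server-side before any data moves
  ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/examples/hello_sock \
    -H 10.0.0.1 -P 3260 -N ssl -E 1234567890ABCDEF -I wrong.id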
00:08:21.647 [2024-07-25 16:59:13.886643] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:08:22.584 [2024-07-25 16:59:14.885012] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:22.584 [2024-07-25 16:59:14.885373] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.585 [2024-07-25 16:59:14.885459] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:08:22.585 [2024-07-25 16:59:14.885551] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:22.585 killing process with pid 61637 00:08:23.962 [2024-07-25 16:59:15.994922] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:23.962 [2024-07-25 16:59:15.995131] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:23.962 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:08:23.962 [2024-07-25 16:59:16.154382] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:23.962 [2024-07-25 16:59:16.154467] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61806 ] 00:08:23.962 [2024-07-25 16:59:16.298582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.963 [2024-07-25 16:59:16.394539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.963 [2024-07-25 16:59:16.394639] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:23.963 [2024-07-25 16:59:16.394713] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(posix) 00:08:24.221 [2024-07-25 16:59:16.654557] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 39340) to (10.0.0.1, 3260) 00:08:24.221 [2024-07-25 16:59:16.654687] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:24.488 killing process with pid 61806 00:08:25.422 [2024-07-25 16:59:17.690911] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:25.422 [2024-07-25 16:59:17.691064] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:25.422 00:08:25.422 real 0m17.313s 00:08:25.422 user 0m19.740s 00:08:25.422 sys 0m3.147s 00:08:25.422 16:59:17 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.422 16:59:17 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:08:25.422 ************************************ 00:08:25.422 END TEST iscsi_tgt_sock 00:08:25.422 ************************************ 00:08:25.422 16:59:17 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@26 -- # [[ -d /usr/local/calsoft ]] 00:08:25.422 16:59:17 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@27 -- # run_test iscsi_tgt_calsoft /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh 00:08:25.422 16:59:17 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.422 16:59:17 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.422 16:59:17 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:08:25.681 ************************************ 00:08:25.681 START TEST iscsi_tgt_calsoft 00:08:25.681 ************************************ 00:08:25.681 16:59:17 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh 00:08:25.681 * Looking for test storage... 00:08:25.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:25.681 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@15 -- # MALLOC_BDEV_SIZE=64 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@16 -- # MALLOC_BLOCK_SIZE=512 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@18 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@19 -- # calsoft_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@22 -- # mkdir -p /usr/local/etc 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@23 -- # cp /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/its.conf /usr/local/etc/ 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@26 -- # echo IP=10.0.0.1 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@28 -- # timing_enter start_iscsi_tgt 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:08:25.682 16:59:18 
iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@30 -- # iscsitestinit 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@33 -- # pid=61898 00:08:25.682 Process pid: 61898 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@34 -- # echo 'Process pid: 61898' 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@32 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x1 --wait-for-rpc 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@36 -- # trap 'killprocess $pid; delete_tmp_conf_files; iscsitestfini; exit 1 ' SIGINT SIGTERM EXIT 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@38 -- # waitforlisten 61898 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@831 -- # '[' -z 61898 ']' 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.682 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:08:25.682 [2024-07-25 16:59:18.125810] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:25.682 [2024-07-25 16:59:18.125883] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61898 ] 00:08:25.941 [2024-07-25 16:59:18.253058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.941 [2024-07-25 16:59:18.367231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.506 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.506 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@864 -- # return 0 00:08:26.506 16:59:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:08:26.764 16:59:19 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:08:27.331 iscsi_tgt is listening. Running tests... 00:08:27.331 16:59:19 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@41 -- # echo 'iscsi_tgt is listening. Running tests...' 
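With the target listening, the harness provisions it over JSON-RPC before launching calsoft.py; the rpc.py calls traced below condense to the following sequence, with every value exactly as it appears in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc iscsi_create_auth_group 1 -c 'user:root secret:tester'   # CHAP credentials, auth group 1
  $rpc iscsi_set_discovery_auth -g 1                            # use auth group 1 for discovery
  $rpc iscsi_create_portal_group 1 10.0.0.1:3260                # portal group tag 1 on the target IP
  $rpc iscsi_create_initiator_group 2 ANY 10.0.0.2/32           # any initiator name from 10.0.0.2
  $rpc bdev_malloc_create -b MyBdev 64 512                      # 64 MiB malloc bdev, 512-byte blocks
  $rpc iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -g 1

The last call exposes MyBdev as LUN 0 on Target3, mapping portal group 1 to initiator group 2 with a queue depth of 64 and CHAP auth group 1, which is the target the calsoft test cases then log in to.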
00:08:27.331 16:59:19 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@43 -- # timing_exit start_iscsi_tgt 00:08:27.331 16:59:19 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:27.331 16:59:19 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:08:27.331 16:59:19 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_auth_group 1 -c 'user:root secret:tester' 00:08:27.331 16:59:19 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_discovery_auth -g 1 00:08:27.589 16:59:19 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:08:27.873 16:59:20 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:08:27.873 16:59:20 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b MyBdev 64 512 00:08:28.131 MyBdev 00:08:28.131 16:59:20 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -g 1 00:08:28.388 16:59:20 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@55 -- # sleep 1 00:08:29.759 16:59:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@57 -- # '[' '' ']' 00:08:29.759 16:59:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py /home/vagrant/spdk_repo/spdk/../output 00:08:29.759 [2024-07-25 16:59:21.928576] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:29.759 [2024-07-25 16:59:21.928689] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:29.759 [2024-07-25 16:59:21.993887] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:08:29.759 [2024-07-25 16:59:22.016765] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:29.759 [2024-07-25 16:59:22.035723] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:08:29.759 PDU 00:08:29.759 00000000 01 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=..... 00:08:29.759 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:08:29.759 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:08:29.759 [2024-07-25 16:59:22.035769] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection 00:08:29.759 [2024-07-25 16:59:22.055855] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:29.759 [2024-07-25 16:59:22.076781] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:29.759 [2024-07-25 16:59:22.076916] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:29.759 [2024-07-25 16:59:22.117843] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:29.759 [2024-07-25 16:59:22.137727] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:29.759 [2024-07-25 16:59:22.137827] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:29.759 [2024-07-25 16:59:22.157544] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:29.759 [2024-07-25 16:59:22.157640] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:29.759 [2024-07-25 16:59:22.215954] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:29.759 [2024-07-25 16:59:22.216064] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:30.328 [2024-07-25 16:59:22.505140] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:08:30.328 [2024-07-25 16:59:22.565933] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:30.328 [2024-07-25 16:59:22.566046] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:30.328 [2024-07-25 16:59:22.587073] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:08:30.328 [2024-07-25 16:59:22.673519] iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:08:30.328 [2024-07-25 16:59:22.674037] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:30.328 [2024-07-25 16:59:22.694546] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:30.328 [2024-07-25 16:59:22.715274] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(3) error ExpCmdSN=4 00:08:30.328 [2024-07-25 16:59:22.715424] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:30.328 [2024-07-25 16:59:22.736297] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:30.328 [2024-07-25 16:59:22.736333] iscsi.c:3961:iscsi_handle_recovery_datain: *ERROR*: Initiator requests BegRun: 0x00000000, RunLength:0x00001000 greater than maximum DataSN: 0x00000004. 00:08:30.328 [2024-07-25 16:59:22.736343] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=10) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:08:30.328 [2024-07-25 16:59:22.736354] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:08:30.328 [2024-07-25 16:59:22.757159] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:30.328 [2024-07-25 16:59:22.757328] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:30.328 [2024-07-25 16:59:22.777021] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:30.587 [2024-07-25 16:59:22.837513] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:08:30.587 PDU 00:08:30.587 00000000 00 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=..... 00:08:30.587 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 
00:08:30.587 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:08:30.587 [2024-07-25 16:59:22.837565] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:08:30.587 [2024-07-25 16:59:22.856383] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:30.587 [2024-07-25 16:59:22.875550] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:30.587 [2024-07-25 16:59:22.875907] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:08:30.587 [2024-07-25 16:59:22.876179] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:08:30.587 [2024-07-25 16:59:22.915069] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:08:30.587 [2024-07-25 16:59:22.937673] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:08:30.587 [2024-07-25 16:59:22.958748] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:08:30.587 [2024-07-25 16:59:22.980717] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:30.587 [2024-07-25 16:59:23.000844] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:08:30.846 [2024-07-25 16:59:23.123314] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:32.748 [2024-07-25 16:59:25.082685] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:32.748 [2024-07-25 16:59:25.104760] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:32.748 [2024-07-25 16:59:25.104858] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:32.748 [2024-07-25 16:59:25.162905] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:32.748 [2024-07-25 16:59:25.162987] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:32.748 [2024-07-25 16:59:25.183546] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(341) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:08:32.748 [2024-07-25 16:59:25.183658] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(8) ignore (ExpCmdSN=9, MaxCmdSN=71) 00:08:32.748 [2024-07-25 16:59:25.184462] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:08:33.006 [2024-07-25 16:59:25.225030] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:33.006 [2024-07-25 16:59:25.225152] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:33.006 [2024-07-25 16:59:25.264554] iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:33.006 [2024-07-25 16:59:25.264585] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:08:33.006 [2024-07-25 16:59:25.264595] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection 00:08:33.006 [2024-07-25 16:59:25.303248] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:33.006 [2024-07-25 16:59:25.322928] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:33.006 [2024-07-25 16:59:25.323049] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:33.006 [2024-07-25 16:59:25.342627] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=ffffffff 00:08:33.006 [2024-07-25 16:59:25.363074] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:33.006 [2024-07-25 16:59:25.383849] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:33.006 [2024-07-25 16:59:25.423063] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:08:33.006 [2024-07-25 16:59:25.423612] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:33.006 [2024-07-25 16:59:25.424020] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:08:33.006 [2024-07-25 16:59:25.424505] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=6, MaxCmdSN=67) 00:08:33.006 [2024-07-25 16:59:25.425565] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:08:33.006 [2024-07-25 16:59:25.445861] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:33.006 [2024-07-25 16:59:25.446067] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:33.264 [2024-07-25 16:59:25.501746] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:08:33.264 [2024-07-25 16:59:25.522033] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:33.264 [2024-07-25 16:59:25.522246] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:33.264 [2024-07-25 16:59:25.540650] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:33.264 [2024-07-25 16:59:25.540910] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:33.264 [2024-07-25 16:59:25.563222] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:33.264 [2024-07-25 16:59:25.563508] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:33.264 [2024-07-25 16:59:25.603171] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:08:33.264 [2024-07-25 16:59:25.700369] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:33.264 [2024-07-25 16:59:25.723814] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:08:33.522 [2024-07-25 16:59:25.762617] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:33.522 [2024-07-25 16:59:25.783548] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:08:33.522 [2024-07-25 16:59:25.803006] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:33.522 [2024-07-25 16:59:25.803202] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:33.522 [2024-07-25 16:59:25.822783] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:33.522 [2024-07-25 16:59:25.822980] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:33.522 [2024-07-25 16:59:25.843212] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:33.522 
[2024-07-25 16:59:25.863220] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:33.522 [2024-07-25 16:59:25.863442] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:33.522 [2024-07-25 16:59:25.883942] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:33.522 [2024-07-25 16:59:25.987008] iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 2745410467, and the dataout task tag is 2728567458 00:08:33.522 [2024-07-25 16:59:25.987121] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:33.522 [2024-07-25 16:59:25.987379] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:33.522 [2024-07-25 16:59:25.987441] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:08:33.780 [2024-07-25 16:59:26.005887] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:33.780 [2024-07-25 16:59:26.005984] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:33.780 [2024-07-25 16:59:26.024670] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:33.780 [2024-07-25 16:59:26.024794] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:33.780 [2024-07-25 16:59:26.065162] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:33.780 [2024-07-25 16:59:26.065469] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:33.780 [2024-07-25 16:59:26.137133] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:33.780 [2024-07-25 16:59:26.168789] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:33.780 [2024-07-25 16:59:26.168888] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:33.780 [2024-07-25 16:59:26.226984] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:08:33.780 [2024-07-25 16:59:26.240683] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:34.038 [2024-07-25 16:59:26.260496] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:34.038 [2024-07-25 16:59:26.372408] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(2) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:08:34.038 [2024-07-25 16:59:26.372477] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:34.038 [2024-07-25 16:59:26.372522] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:34.038 [2024-07-25 16:59:26.392121] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:34.038 [2024-07-25 16:59:26.392222] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:34.038 [2024-07-25 16:59:26.430767] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:34.038 [2024-07-25 16:59:26.430896] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:34.038 [2024-07-25 16:59:26.469613] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:34.038 [2024-07-25 16:59:26.488629] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:34.038 [2024-07-25 16:59:26.488742] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:34.297 [2024-07-25 16:59:26.523665] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:34.297 [2024-07-25 
16:59:26.587046] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:34.297 [2024-07-25 16:59:26.587155] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:34.297 [2024-07-25 16:59:26.619860] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:34.297 [2024-07-25 16:59:26.676623] param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 276 00:08:34.297 [2024-07-25 16:59:26.676656] iscsi.c:1303:iscsi_op_login_store_incoming_params: *ERROR*: iscsi_parse_params() failed 00:08:34.297 [2024-07-25 16:59:26.718338] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:34.297 [2024-07-25 16:59:26.718440] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:34.297 [2024-07-25 16:59:26.743869] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:34.555 [2024-07-25 16:59:26.766274] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:34.555 [2024-07-25 16:59:26.825757] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:34.555 [2024-07-25 16:59:26.912513] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key ImmediateDataa 00:08:34.555 [2024-07-25 16:59:27.002491] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:34.555 [2024-07-25 16:59:27.002599] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:34.813 [2024-07-25 16:59:27.075493] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:08:34.813 [2024-07-25 16:59:27.075596] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:08:34.813 [2024-07-25 16:59:27.095183] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:34.813 [2024-07-25 16:59:27.133468] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:34.813 [2024-07-25 16:59:27.133572] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:34.813 [2024-07-25 16:59:27.151843] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:34.813 [2024-07-25 16:59:27.192689] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:34.813 [2024-07-25 16:59:27.210329] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:08:34.813 [2024-07-25 16:59:27.250251] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:34.813 [2024-07-25 16:59:27.250372] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:34.813 [2024-07-25 16:59:27.270563] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:35.073 [2024-07-25 16:59:27.289404] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:35.073 [2024-07-25 16:59:27.289498] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:35.073 [2024-07-25 16:59:27.309095] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:35.073 [2024-07-25 16:59:27.309988] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:35.073 [2024-07-25 16:59:27.406856] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:35.073 [2024-07-25 16:59:27.480109] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:08:35.073 [2024-07-25 16:59:27.496848] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore 
(ExpCmdSN=4, MaxCmdSN=67) 00:08:35.073 [2024-07-25 16:59:27.496933] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:35.073 [2024-07-25 16:59:27.515849] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:35.073 [2024-07-25 16:59:27.536061] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:36.450 [2024-07-25 16:59:28.593528] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:08:37.384 [2024-07-25 16:59:29.573912] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=6, MaxCmdSN=68) 00:08:37.384 [2024-07-25 16:59:29.574581] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=7 00:08:37.384 [2024-07-25 16:59:29.593820] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=5, MaxCmdSN=68) 00:08:38.321 [2024-07-25 16:59:30.594201] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=6, MaxCmdSN=69) 00:08:38.321 [2024-07-25 16:59:30.594353] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=7, MaxCmdSN=70) 00:08:38.321 [2024-07-25 16:59:30.594368] iscsi.c:4028:iscsi_handle_status_snack: *ERROR*: Unable to find StatSN: 0x00000007. For a StatusSNACK, assuming this is a proactive SNACK for an untransmitted StatSN, ignoring. 00:08:38.321 [2024-07-25 16:59:30.594382] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=8 00:08:50.641 [2024-07-25 16:59:42.639817] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:08:50.641 [2024-07-25 16:59:42.661045] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:08:50.641 [2024-07-25 16:59:42.680423] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:08:50.641 [2024-07-25 16:59:42.681686] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:08:50.641 [2024-07-25 16:59:42.701392] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:08:50.641 [2024-07-25 16:59:42.722350] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:08:50.641 [2024-07-25 16:59:42.745633] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:08:50.641 [2024-07-25 16:59:42.786358] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:08:50.641 [2024-07-25 16:59:42.787689] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=64 00:08:50.641 [2024-07-25 16:59:42.808157] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1107296256) error ExpCmdSN=66 00:08:50.641 [2024-07-25 16:59:42.829338] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:08:50.641 [2024-07-25 16:59:42.849440] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=67 00:08:50.641 Skipping tc_ffp_15_2. It is known to fail. 00:08:50.641 Skipping tc_ffp_29_2. It is known to fail. 00:08:50.641 Skipping tc_ffp_29_3. It is known to fail. 00:08:50.641 Skipping tc_ffp_29_4. It is known to fail. 00:08:50.641 Skipping tc_err_1_1. It is known to fail. 00:08:50.641 Skipping tc_err_1_2. It is known to fail. 00:08:50.641 Skipping tc_err_2_8. It is known to fail. 00:08:50.641 Skipping tc_err_3_1. It is known to fail. 00:08:50.641 Skipping tc_err_3_2. It is known to fail. 00:08:50.641 Skipping tc_err_3_3. It is known to fail. 00:08:50.641 Skipping tc_err_3_4. It is known to fail. 00:08:50.641 Skipping tc_err_5_1. It is known to fail. 00:08:50.641 Skipping tc_login_3_1. It is known to fail. 
00:08:50.641 Skipping tc_login_11_2. It is known to fail. 00:08:50.641 Skipping tc_login_11_4. It is known to fail. 00:08:50.641 Skipping tc_login_2_2. It is known to fail. 00:08:50.641 Skipping tc_login_29_1. It is known to fail. 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@62 -- # failed=0 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@67 -- # iscsicleanup 00:08:50.641 Cleaning up iSCSI connection 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:08:50.641 iscsiadm: No matching sessions found 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@983 -- # true 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:08:50.641 iscsiadm: No records found 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@984 -- # true 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@985 -- # rm -rf 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@68 -- # killprocess 61898 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@950 -- # '[' -z 61898 ']' 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # kill -0 61898 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@955 -- # uname 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61898 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.641 killing process with pid 61898 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61898' 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@969 -- # kill 61898 00:08:50.641 16:59:42 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@974 -- # wait 61898 00:08:50.954 16:59:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@69 -- # delete_tmp_conf_files 00:08:50.954 16:59:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@12 -- # rm -f /usr/local/etc/its.conf 00:08:50.954 16:59:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@70 -- # iscsitestfini 00:08:50.954 16:59:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:08:50.954 16:59:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@71 -- # exit 0 00:08:50.954 00:08:50.954 real 0m25.420s 00:08:50.954 user 0m40.863s 00:08:50.954 sys 0m3.108s 00:08:50.954 16:59:43 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.954 16:59:43 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:08:50.955 ************************************ 00:08:50.955 END TEST iscsi_tgt_calsoft 00:08:50.955 ************************************ 00:08:50.955 16:59:43 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@31 -- # run_test iscsi_tgt_filesystem 
/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:08:50.955 16:59:43 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:50.955 16:59:43 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.955 16:59:43 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:08:50.955 ************************************ 00:08:50.955 START TEST iscsi_tgt_filesystem 00:08:50.955 ************************************ 00:08:50.955 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:08:51.216 * Looking for test storage... 00:08:51.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/setup/common.sh 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=y 00:08:51.216 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:51.217 
16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@81 -- 
# CONFIG_DPDK_COMPRESSDEV=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:51.217 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:51.218 #define SPDK_CONFIG_H 00:08:51.218 #define SPDK_CONFIG_APPS 1 00:08:51.218 #define SPDK_CONFIG_ARCH native 00:08:51.218 #undef SPDK_CONFIG_ASAN 00:08:51.218 #undef SPDK_CONFIG_AVAHI 00:08:51.218 #undef SPDK_CONFIG_CET 00:08:51.218 #define SPDK_CONFIG_COVERAGE 1 00:08:51.218 #define SPDK_CONFIG_CROSS_PREFIX 00:08:51.218 #undef SPDK_CONFIG_CRYPTO 00:08:51.218 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:51.218 #undef SPDK_CONFIG_CUSTOMOCF 00:08:51.218 #undef SPDK_CONFIG_DAOS 00:08:51.218 #define SPDK_CONFIG_DAOS_DIR 00:08:51.218 #define SPDK_CONFIG_DEBUG 1 00:08:51.218 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:51.218 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:08:51.218 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:51.218 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:51.218 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:51.218 #undef SPDK_CONFIG_DPDK_UADK 00:08:51.218 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:51.218 #define SPDK_CONFIG_EXAMPLES 1 00:08:51.218 #undef SPDK_CONFIG_FC 00:08:51.218 #define SPDK_CONFIG_FC_PATH 00:08:51.218 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:51.218 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:51.218 
#undef SPDK_CONFIG_FUSE 00:08:51.218 #undef SPDK_CONFIG_FUZZER 00:08:51.218 #define SPDK_CONFIG_FUZZER_LIB 00:08:51.218 #undef SPDK_CONFIG_GOLANG 00:08:51.218 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:51.218 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:51.218 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:51.218 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:51.218 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:51.218 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:51.218 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:51.218 #define SPDK_CONFIG_IDXD 1 00:08:51.218 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:51.218 #undef SPDK_CONFIG_IPSEC_MB 00:08:51.218 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:51.218 #define SPDK_CONFIG_ISAL 1 00:08:51.218 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:51.218 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:51.218 #define SPDK_CONFIG_LIBDIR 00:08:51.218 #undef SPDK_CONFIG_LTO 00:08:51.218 #define SPDK_CONFIG_MAX_LCORES 128 00:08:51.218 #define SPDK_CONFIG_NVME_CUSE 1 00:08:51.218 #undef SPDK_CONFIG_OCF 00:08:51.218 #define SPDK_CONFIG_OCF_PATH 00:08:51.218 #define SPDK_CONFIG_OPENSSL_PATH 00:08:51.218 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:51.218 #define SPDK_CONFIG_PGO_DIR 00:08:51.218 #undef SPDK_CONFIG_PGO_USE 00:08:51.218 #define SPDK_CONFIG_PREFIX /usr/local 00:08:51.218 #undef SPDK_CONFIG_RAID5F 00:08:51.218 #define SPDK_CONFIG_RBD 1 00:08:51.218 #define SPDK_CONFIG_RDMA 1 00:08:51.218 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:51.218 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:51.218 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:51.218 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:51.218 #define SPDK_CONFIG_SHARED 1 00:08:51.218 #undef SPDK_CONFIG_SMA 00:08:51.218 #define SPDK_CONFIG_TESTS 1 00:08:51.218 #undef SPDK_CONFIG_TSAN 00:08:51.218 #define SPDK_CONFIG_UBLK 1 00:08:51.218 #define SPDK_CONFIG_UBSAN 1 00:08:51.218 #undef SPDK_CONFIG_UNIT_TESTS 00:08:51.218 #undef SPDK_CONFIG_URING 00:08:51.218 #define SPDK_CONFIG_URING_PATH 00:08:51.218 #undef SPDK_CONFIG_URING_ZNS 00:08:51.218 #undef SPDK_CONFIG_USDT 00:08:51.218 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:51.218 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:51.218 #undef SPDK_CONFIG_VFIO_USER 00:08:51.218 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:51.218 #define SPDK_CONFIG_VHOST 1 00:08:51.218 #define SPDK_CONFIG_VIRTIO 1 00:08:51.218 #undef SPDK_CONFIG_VTUNE 00:08:51.218 #define SPDK_CONFIG_VTUNE_DIR 00:08:51.218 #define SPDK_CONFIG_WERROR 1 00:08:51.218 #define SPDK_CONFIG_WPDK_DIR 00:08:51.218 #undef SPDK_CONFIG_XNVME 00:08:51.218 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # uname -s 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 
-- # MONITOR_RESOURCES_SUDO=() 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@58 -- # : 0 00:08:51.218 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@70 -- # : 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@76 -- # : 1 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@78 -- # : 1 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:51.219 16:59:43 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@86 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@92 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@94 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@104 -- # : 1 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@114 -- # : 
0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@124 -- # : 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@138 -- # : 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@148 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:51.219 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@154 -- # : 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@166 -- # : 0 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@169 -- # : 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@173 -- # : 0 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@179 
-- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@202 -- # cat 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = 
Linux ']' 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j10 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@320 -- # [[ -z 62615 ]] 00:08:51.220 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@320 -- # kill -0 62615 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.AKfCgr 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem /tmp/spdk.AKfCgr/tests/filesystem /tmp/spdk.AKfCgr 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@329 -- # df -T 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:08:51.221 16:59:43 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=devtmpfs 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=4194304 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=4194304 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6264512512 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6267887616 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=3375104 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=2496167936 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=2507157504 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10989568 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda5 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=btrfs 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=13787447296 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20314062848 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5240897536 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda5 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=btrfs 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=13787447296 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20314062848 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5240897536 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:51.221 16:59:43 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6267748352 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6267891712 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=143360 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda2 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext4 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=843546624 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=1012768768 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=100016128 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda3 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=vfat 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=92499968 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=104607744 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=12107776 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=1253572608 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=1253576704 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest_2/fedora38-libvirt/output 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=fuse.sshfs 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=92985610240 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=105088212992 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6717169664 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:08:51.221 * Looking for test storage... 00:08:51.221 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:08:51.222 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:08:51.222 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # df /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:51.222 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # mount=/home 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@376 -- # target_space=13787447296 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@382 -- # [[ btrfs == tmpfs ]] 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@382 -- # [[ btrfs == ramfs ]] 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@382 -- # [[ /home == / ]] 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:51.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@391 -- # return 0 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@31 -- # 
xtrace_restore 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@11 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.481 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@13 -- # iscsitestinit 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@29 -- # timing_enter start_iscsi_tgt 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@32 -- # pid=62652 00:08:51.482 Process pid: 62652 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@33 -- # echo 'Process pid: 62652' 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@35 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@37 -- # waitforlisten 62652 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@831 -- # '[' -z 62652 ']' 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.482 16:59:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.482 [2024-07-25 16:59:43.774920] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:51.482 [2024-07-25 16:59:43.774994] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62652 ] 00:08:51.482 [2024-07-25 16:59:43.916532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.741 [2024-07-25 16:59:44.010443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.741 [2024-07-25 16:59:44.010567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.741 [2024-07-25 16:59:44.010751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.741 [2024-07-25 16:59:44.010752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.310 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.310 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@864 -- # return 0 00:08:52.310 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@38 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:08:52.310 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.310 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.310 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.310 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@39 -- # rpc_cmd framework_start_init 00:08:52.310 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.310 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@40 -- # echo 'iscsi_tgt is listening. Running tests...' 00:08:52.569 iscsi_tgt is listening. Running tests... 
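Stripped of the xtrace noise, bringing the filesystem-test target up amounts to three steps; this is a sketch reconstructed from the commands in the trace above (rpc_cmd in SPDK's test harness is, to our understanding, a thin wrapper that forwards to scripts/rpc.py over /var/tmp/spdk.sock):

    # launch the target inside the test network namespace, idle until RPC init
    ip netns exec spdk_iscsi_ns ./build/bin/iscsi_tgt -m 0xF --wait-for-rpc &
    pid=$!                                                    # 62652 in this run
    # once the socket answers, apply options and finish subsystem init
    scripts/rpc.py -s /var/tmp/spdk.sock iscsi_set_options -o 30 -a 16
    scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init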
00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@42 -- # timing_exit start_iscsi_tgt 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # get_first_nvme_bdf 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # bdfs=() 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # local bdfs 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # local bdfs 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # bdf=0000:00:10.0 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@45 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@46 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@47 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:00:10.0 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.569 16:59:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.828 Nvme0n1 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # rpc_cmd bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # ls_guid=93c503d8-2866-425c-937e-7e3300ca049c 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # get_lvs_free_mb 93c503d8-2866-425c-937e-7e3300ca049c 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1364 -- # local lvs_uuid=93c503d8-2866-425c-937e-7e3300ca049c 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1365 -- # local lvs_info 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1366 -- # local fc 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1367 -- # local cs 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_lvol_get_lvstores 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.828 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:08:52.828 { 00:08:52.828 "uuid": "93c503d8-2866-425c-937e-7e3300ca049c", 00:08:52.828 "name": "lvs_0", 00:08:52.829 "base_bdev": "Nvme0n1", 00:08:52.829 "total_data_clusters": 1278, 00:08:52.829 "free_clusters": 1278, 00:08:52.829 "block_size": 4096, 00:08:52.829 "cluster_size": 4194304 00:08:52.829 } 00:08:52.829 ]' 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="93c503d8-2866-425c-937e-7e3300ca049c") .free_clusters' 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # fc=1278 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="93c503d8-2866-425c-937e-7e3300ca049c") .cluster_size' 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # cs=4194304 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1373 -- # free_mb=5112 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1374 -- # echo 5112 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # free_mb=5112 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@52 -- # '[' 5112 -gt 2048 ']' 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@53 -- # rpc_cmd bdev_lvol_create -u 93c503d8-2866-425c-937e-7e3300ca049c lbd_0 2048 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.829 044a9879-b05f-4ecb-981c-7692821a3458 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@61 -- # lvol_name=lvs_0/lbd_0 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@62 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias 
lvs_0/lbd_0:0 1:2 256 -d 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.829 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@63 -- # sleep 1 00:08:53.766 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@65 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:08:54.025 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@66 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:54.025 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:54.025 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@67 -- # waitforiscsidevices 1 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@116 -- # local num=1 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:54.025 [2024-07-25 16:59:46.339515] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # n=1 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@123 -- # return 0 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # get_bdev_size lvs_0/lbd_0 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1378 -- # local bdev_name=lvs_0/lbd_0 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1380 -- # local bs 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1381 -- # local nb 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b lvs_0/lbd_0 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:54.025 { 00:08:54.025 "name": "044a9879-b05f-4ecb-981c-7692821a3458", 00:08:54.025 "aliases": [ 00:08:54.025 "lvs_0/lbd_0" 00:08:54.025 ], 00:08:54.025 "product_name": "Logical Volume", 00:08:54.025 "block_size": 4096, 00:08:54.025 "num_blocks": 524288, 00:08:54.025 "uuid": "044a9879-b05f-4ecb-981c-7692821a3458", 00:08:54.025 "assigned_rate_limits": { 00:08:54.025 "rw_ios_per_sec": 0, 00:08:54.025 "rw_mbytes_per_sec": 0, 00:08:54.025 
"r_mbytes_per_sec": 0, 00:08:54.025 "w_mbytes_per_sec": 0 00:08:54.025 }, 00:08:54.025 "claimed": false, 00:08:54.025 "zoned": false, 00:08:54.025 "supported_io_types": { 00:08:54.025 "read": true, 00:08:54.025 "write": true, 00:08:54.025 "unmap": true, 00:08:54.025 "flush": false, 00:08:54.025 "reset": true, 00:08:54.025 "nvme_admin": false, 00:08:54.025 "nvme_io": false, 00:08:54.025 "nvme_io_md": false, 00:08:54.025 "write_zeroes": true, 00:08:54.025 "zcopy": false, 00:08:54.025 "get_zone_info": false, 00:08:54.025 "zone_management": false, 00:08:54.025 "zone_append": false, 00:08:54.025 "compare": false, 00:08:54.025 "compare_and_write": false, 00:08:54.025 "abort": false, 00:08:54.025 "seek_hole": true, 00:08:54.025 "seek_data": true, 00:08:54.025 "copy": false, 00:08:54.025 "nvme_iov_md": false 00:08:54.025 }, 00:08:54.025 "driver_specific": { 00:08:54.025 "lvol": { 00:08:54.025 "lvol_store_uuid": "93c503d8-2866-425c-937e-7e3300ca049c", 00:08:54.025 "base_bdev": "Nvme0n1", 00:08:54.025 "thin_provision": false, 00:08:54.025 "num_allocated_clusters": 512, 00:08:54.025 "snapshot": false, 00:08:54.025 "clone": false, 00:08:54.025 "esnap_clone": false 00:08:54.025 } 00:08:54.025 } 00:08:54.025 } 00:08:54.025 ]' 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # bs=4096 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # nb=524288 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1387 -- # bdev_size=2048 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1388 -- # echo 2048 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # lvol_size=2147483648 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@70 -- # trap 'iscsicleanup; remove_backends; umount /mnt/device; rm -rf /mnt/device; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@72 -- # mkdir -p /mnt/device 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # iscsiadm -m session -P 3 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # grep 'Attached scsi disk' 00:08:54.025 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # awk '{print $4}' 00:08:54.285 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # dev=sda 00:08:54.285 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@76 -- # waitforfile /dev/sda 00:08:54.285 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1265 -- # local i=0 00:08:54.285 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:08:54.285 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /dev/sda ']' 00:08:54.285 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1276 -- # return 0 00:08:54.285 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # sec_size_to_bytes sda 00:08:54.285 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@76 -- # local dev=sda 00:08:54.285 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@78 -- # [[ -e /sys/block/sda ]] 00:08:54.285 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@80 -- # echo 2147483648 00:08:54.285 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # dev_size=2147483648 00:08:54.285 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@80 -- # (( lvol_size == dev_size )) 00:08:54.285 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@81 -- # parted -s /dev/sda mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:54.285 [2024-07-25 16:59:46.529426] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:54.285 16:59:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@82 -- # sleep 1 00:08:55.221 16:59:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@144 -- # run_test iscsi_tgt_filesystem_ext4 filesystem_test ext4 00:08:55.221 16:59:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:55.221 16:59:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.221 16:59:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.221 ************************************ 00:08:55.221 START TEST iscsi_tgt_filesystem_ext4 00:08:55.221 ************************************ 00:08:55.221 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1125 -- # filesystem_test ext4 00:08:55.221 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@89 -- # fstype=ext4 00:08:55.222 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@91 -- # make_filesystem ext4 /dev/sda1 00:08:55.222 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:08:55.222 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/sda1 00:08:55.222 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:08:55.222 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:08:55.222 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:08:55.222 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:08:55.222 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda1 00:08:55.222 mke2fs 1.46.5 (30-Dec-2021) 00:08:55.222 Discarding device blocks: 0/522240 done 00:08:55.222 Creating filesystem with 522240 4k blocks and 130560 inodes 00:08:55.222 Filesystem UUID: 0f8070f8-4541-4749-bcbf-ad20c1ef1d05 00:08:55.222 Superblock backups stored on blocks: 00:08:55.222 32768, 98304, 163840, 229376, 294912 00:08:55.222 00:08:55.222 Allocating group tables: 0/16 done 00:08:55.222 Writing inode tables: 0/16 done 00:08:55.483 Creating journal (8192 blocks): 
done 00:08:55.483 Writing superblocks and filesystem accounting information: 0/16 done 00:08:55.483 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@93 -- # '[' 0 -eq 1 ']' 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@119 -- # touch /mnt/device/aaa 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@120 -- # umount /mnt/device 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@122 -- # iscsiadm -m node --logout 00:08:55.483 Logging out of session [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:55.483 Logout of [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@123 -- # waitforiscsidevices 0 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=0 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:55.483 iscsiadm: No active sessions. 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # true 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=0 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@124 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:55.483 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:55.483 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@125 -- # waitforiscsidevices 1 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=1 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:55.483 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:55.483 [2024-07-25 16:59:47.941062] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=1 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@127 -- # iscsiadm -m session -P 3 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@127 -- # awk '{print $4}' 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@127 -- # grep 'Attached scsi disk' 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@127 -- # dev=sda 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@129 -- # waitforfile /dev/sda1 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1265 -- # local i=0 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1276 -- # return 0 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@130 -- # mount -o rw /dev/sda1 /mnt/device 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@132 -- # '[' -f /mnt/device/aaa ']' 00:08:55.742 File existed. 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@133 -- # echo 'File existed.' 
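The ext4 pass just traced is a persistence check: format the iSCSI LUN, write a file, bounce the session, remount, and confirm the file survived. Condensed from the exact commands in the log (the sda/sda1 device name is simply what this particular login produced):

    mkfs.ext4 -F /dev/sda1
    mount /dev/sda1 /mnt/device
    touch /mnt/device/aaa
    umount /mnt/device
    iscsiadm -m node --logout
    iscsiadm -m node --login -p 10.0.0.1:3260
    mount -o rw /dev/sda1 /mnt/device
    [ -f /mnt/device/aaa ] && echo 'File existed.'

The btrfs and xfs runs below repeat the same sequence, swapping in mkfs.btrfs -f and mkfs.xfs -f.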
00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@139 -- # rm -rf /mnt/device/aaa 00:08:55.742 16:59:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@140 -- # umount /mnt/device 00:08:55.742 00:08:55.742 real 0m0.489s 00:08:55.742 user 0m0.050s 00:08:55.742 sys 0m0.109s 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:55.742 ************************************ 00:08:55.742 END TEST iscsi_tgt_filesystem_ext4 00:08:55.742 ************************************ 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@145 -- # run_test iscsi_tgt_filesystem_btrfs filesystem_test btrfs 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.742 ************************************ 00:08:55.742 START TEST iscsi_tgt_filesystem_btrfs 00:08:55.742 ************************************ 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1125 -- # filesystem_test btrfs 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@89 -- # fstype=btrfs 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@91 -- # make_filesystem btrfs /dev/sda1 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/sda1 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/sda1 00:08:55.742 btrfs-progs v6.6.2 00:08:55.742 See https://btrfs.readthedocs.io for more information. 00:08:55.742 00:08:55.742 Performing full device TRIM /dev/sda1 (1.99GiB) ... 
00:08:55.742 NOTE: several default settings have changed in version 5.15, please make sure 00:08:55.742 this does not affect your deployments: 00:08:55.742 - DUP for metadata (-m dup) 00:08:55.742 - enabled no-holes (-O no-holes) 00:08:55.742 - enabled free-space-tree (-R free-space-tree) 00:08:55.742 00:08:55.742 Label: (null) 00:08:55.742 UUID: 1600a873-5161-4447-9db3-473c1bf16436 00:08:55.742 Node size: 16384 00:08:55.742 Sector size: 4096 00:08:55.742 Filesystem size: 1.99GiB 00:08:55.742 Block group profiles: 00:08:55.742 Data: single 8.00MiB 00:08:55.742 Metadata: DUP 102.00MiB 00:08:55.742 System: DUP 8.00MiB 00:08:55.742 SSD detected: yes 00:08:55.742 Zoned device: no 00:08:55.742 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:55.742 Runtime features: free-space-tree 00:08:55.742 Checksum: crc32c 00:08:55.742 Number of devices: 1 00:08:55.742 Devices: 00:08:55.742 ID SIZE PATH 00:08:55.742 1 1.99GiB /dev/sda1 00:08:55.742 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:08:55.742 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@93 -- # '[' 0 -eq 1 ']' 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@119 -- # touch /mnt/device/aaa 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@120 -- # umount /mnt/device 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@122 -- # iscsiadm -m node --logout 00:08:56.000 Logging out of session [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:56.000 Logout of [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@123 -- # waitforiscsidevices 0 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:56.000 iscsiadm: No active sessions. 
00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # true 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=0 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@124 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:56.000 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:56.000 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@125 -- # waitforiscsidevices 1 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:56.000 [2024-07-25 16:59:48.333033] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=1 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@127 -- # iscsiadm -m session -P 3 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@127 -- # grep 'Attached scsi disk' 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@127 -- # awk '{print $4}' 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@127 -- # dev=sda 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@129 -- # waitforfile /dev/sda1 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1265 -- # local i=0 00:08:56.000 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /dev/sda1 ']' 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1276 -- # return 0 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@130 -- # mount -o rw /dev/sda1 /mnt/device 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@132 -- # '[' -f /mnt/device/aaa ']' 00:08:56.001 File existed. 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@133 -- # echo 'File existed.' 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@139 -- # rm -rf /mnt/device/aaa 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@140 -- # umount /mnt/device 00:08:56.001 00:08:56.001 real 0m0.299s 00:08:56.001 user 0m0.032s 00:08:56.001 sys 0m0.095s 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:56.001 ************************************ 00:08:56.001 END TEST iscsi_tgt_filesystem_btrfs 00:08:56.001 ************************************ 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@146 -- # run_test iscsi_tgt_filesystem_xfs filesystem_test xfs 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.001 ************************************ 00:08:56.001 START TEST iscsi_tgt_filesystem_xfs 00:08:56.001 ************************************ 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1125 -- # filesystem_test xfs 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@89 -- # fstype=xfs 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@91 -- # make_filesystem xfs /dev/sda1 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/sda1 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:08:56.001 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/sda1 00:08:56.258 meta-data=/dev/sda1 isize=512 agcount=4, agsize=130560 blks 00:08:56.258 = sectsz=4096 attr=2, projid32bit=1 00:08:56.258 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:56.258 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 
00:08:56.258 data = bsize=4096 blocks=522240, imaxpct=25 00:08:56.258 = sunit=0 swidth=0 blks 00:08:56.258 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:56.258 log =internal log bsize=4096 blocks=16384, version=2 00:08:56.258 = sectsz=4096 sunit=1 blks, lazy-count=1 00:08:56.258 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:56.514 Discarding blocks...Done. 00:08:56.514 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:08:56.514 16:59:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@93 -- # '[' 0 -eq 1 ']' 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@119 -- # touch /mnt/device/aaa 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@120 -- # umount /mnt/device 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@122 -- # iscsiadm -m node --logout 00:08:57.450 Logging out of session [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:57.450 Logout of [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@123 -- # waitforiscsidevices 0 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:57.450 iscsiadm: No active sessions. 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # true 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=0 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@124 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:57.450 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:57.450 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
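The waitforiscsidevices polling traced just below is worth calling out: the xfs run is the one place in this log where the first poll comes back short (n=0) and the 0.1 s retry succeeds (n=1). Reconstructed from the traced loop (a sketch, not the verbatim helper):

    waitforiscsidevices() {
        local num=$1 i n
        for ((i = 1; i <= 20; i++)); do       # up to 20 attempts
            n=$(iscsiadm -m session -P 3 | grep -c 'Attached scsi disk sd[a-z]*')
            [ "$n" -eq "$num" ] && return 0   # expected device count reached
            sleep 0.1
        done
        return 1                              # devices never showed up
    }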
00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@125 -- # waitforiscsidevices 1 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # true 00:08:57.450 [2024-07-25 16:59:49.734663] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=0 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 1 ']' 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@121 -- # sleep 0.1 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i++ )) 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=1 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@127 -- # grep 'Attached scsi disk' 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@127 -- # iscsiadm -m session -P 3 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@127 -- # awk '{print $4}' 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@127 -- # dev=sda 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@129 -- # waitforfile /dev/sda1 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1265 -- # local i=0 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /dev/sda1 ']' 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1276 -- # return 0 00:08:57.450 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@130 -- # mount -o rw /dev/sda1 /mnt/device 00:08:57.708 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@132 -- # '[' -f /mnt/device/aaa ']' 00:08:57.708 File existed. 00:08:57.708 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@133 -- # echo 'File existed.' 00:08:57.708 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@139 -- # rm -rf /mnt/device/aaa 00:08:57.708 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@140 -- # umount /mnt/device 00:08:57.708 00:08:57.708 real 0m1.557s 00:08:57.708 user 0m0.063s 00:08:57.708 sys 0m0.140s 00:08:57.708 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.708 16:59:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:57.708 ************************************ 00:08:57.708 END TEST iscsi_tgt_filesystem_xfs 00:08:57.708 ************************************ 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@148 -- # rm -rf /mnt/device 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@152 -- # iscsicleanup 00:08:57.708 Cleaning up iSCSI connection 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:08:57.708 Logging out of session [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:57.708 Logout of [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
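Cleanup, traced here and in the lines below, unwinds everything in reverse order. Condensed (the RPC names are the ones visible in the log; killprocess is SPDK's kill-and-wait helper, shown here as a plain kill):

    iscsiadm -m node --logout
    iscsiadm -m node -o delete                # drop the discovered node records
    scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0
    scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
    scripts/rpc.py bdev_nvme_detach_controller Nvme0
    kill $pid                                 # killprocess 62652 in this run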
00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@985 -- # rm -rf 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@153 -- # remove_backends 00:08:57.708 INFO: Removing lvol bdev 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@17 -- # echo 'INFO: Removing lvol bdev' 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@18 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.708 [2024-07-25 16:59:50.126693] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (044a9879-b05f-4ecb-981c-7692821a3458) received event(SPDK_BDEV_EVENT_REMOVE) 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.708 INFO: Removing lvol stores 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@20 -- # echo 'INFO: Removing lvol stores' 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@21 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.708 INFO: Removing NVMe 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@23 -- # echo 'INFO: Removing NVMe' 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@24 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.708 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.966 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.966 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@26 -- # return 0 00:08:57.966 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@154 -- # killprocess 62652 00:08:57.966 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@950 -- # '[' -z 62652 ']' 00:08:57.966 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # kill -0 62652 00:08:57.966 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@955 -- # uname 00:08:57.966 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.966 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62652 00:08:57.966 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.966 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:57.966 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62652' 00:08:57.966 killing process with pid 62652 00:08:57.966 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@969 -- # kill 62652 00:08:57.966 16:59:50 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@974 -- # wait 62652 00:08:58.224 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@155 -- # iscsitestfini 00:08:58.224 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:08:58.224 ************************************ 00:08:58.224 END TEST iscsi_tgt_filesystem 00:08:58.224 ************************************ 00:08:58.224 00:08:58.224 real 0m7.205s 00:08:58.224 user 0m26.195s 00:08:58.224 sys 0m1.544s 00:08:58.224 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.224 16:59:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.224 16:59:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@32 -- # run_test chap_during_discovery /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:08:58.224 16:59:50 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.224 16:59:50 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.224 16:59:50 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:08:58.224 ************************************ 00:08:58.224 START TEST chap_during_discovery 00:08:58.224 ************************************ 00:08:58.224 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:08:58.483 * Looking for test storage... 00:08:58.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- 
iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@13 -- # USER=chapo 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@14 -- # MUSER=mchapo 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@15 -- # PASS=123456789123 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@16 -- # MPASS=321978654321 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@19 -- # iscsitestinit 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@21 -- # set_up_iscsi_target 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@142 -- # pid=63095 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 63095' 00:08:58.483 iSCSI target launched. pid: 63095 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@145 -- # waitforlisten 63095 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@831 -- # '[' -z 63095 ']' 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
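The CHAP discovery test boots its own, smaller target instance. Per the traced command, and reading the flags by SPDK's common application options (-m core mask, -p main lcore, -s hugepage memory in MB; our reading, not stated in the log):

    ip netns exec spdk_iscsi_ns ./build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc &
    pid=$!                                    # 63095 here
    # waitforlisten then polls /var/tmp/spdk.sock (max_retries=100 per the trace)
    # until the application answers RPCs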
00:08:58.483 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.484 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.484 [2024-07-25 16:59:50.870490] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:58.484 [2024-07-25 16:59:50.870561] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63095 ] 00:08:58.742 [2024-07-25 16:59:51.115518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.742 [2024-07-25 16:59:51.192621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.309 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.309 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@864 -- # return 0 00:08:59.309 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:08:59.309 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.309 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.309 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.309 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:08:59.309 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.309 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.567 iscsi_tgt is listening. Running tests... 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 
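
Because the app is parked at --wait-for-rpc, pre-init options are applied first and initialization is released explicitly; the provisioning recorded next (chap_common.sh@146-155) then builds the portal group, initiator group, backing bdev, and target node. A sketch of the same sequence as plain rpc.py calls, reusing the $rpc path from the sketch above (the harness's rpc_cmd wrapper is equivalent):

    $rpc iscsi_set_options -o 30 -a 4                    # pre-init iSCSI options, as issued above
    $rpc framework_start_init                            # finish the init deferred by --wait-for-rpc
    $rpc iscsi_create_portal_group 1 10.0.0.1:3260       # portal group tag 1 on the target address
    $rpc iscsi_create_initiator_group 2 ANY 10.0.0.2/32  # tag 2: any initiator name from the initiator IP
    $rpc bdev_malloc_create 64 512                       # 64 MiB RAM bdev, 512-byte blocks; prints "Malloc0"
    # LUN 0 = Malloc0, map portal group 1 to initiator group 2, queue depth 256;
    # -d disables login CHAP on the node for now (the tests switch it on later).
    $rpc iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d
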
00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.567 Malloc0 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.567 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@155 -- # sleep 1 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:09:00.502 configuring target for bidirectional authentication 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@24 -- # echo 'configuring target for bidirectional authentication' 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # 
CHAP_PASS=123456789123 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@95 -- # '[' 0 -eq 1 ']' 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 1 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.502 16:59:52 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.760 executing discovery without adding credential to initiator - we expect failure 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@27 -- # rc=0 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:09:00.760 iscsiadm: Login failed to authenticate with target 00:09:00.760 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:09:00.760 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # rc=24 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@29 -- # '[' 24 -eq 0 ']' 00:09:00.760 configuring initiator for bidirectional authentication 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@35 -- # echo 'configuring initiator for bidirectional authentication' 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@36 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.760 16:59:52 
iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:09:00.760 16:59:52 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:09:00.760 iscsiadm: No matching sessions found 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:09:00.760 iscsiadm: No records found 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # true 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' 
/etc/iscsi/iscsid.conf 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:09:00.760 16:59:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:09:04.041 16:59:56 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:09:04.041 16:59:56 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:09:04.974 16:59:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:04.974 16:59:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@116 -- # '[' 0 -eq 1 ']' 00:09:04.974 16:59:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:09:04.974 16:59:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:09:04.974 16:59:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:09:04.974 16:59:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:09:04.974 16:59:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:09:04.974 16:59:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:09:04.974 16:59:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:09:04.974 16:59:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:09:04.975 16:59:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:09:04.975 16:59:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@135 -- # restart_iscsid 00:09:04.975 16:59:57 iscsi_tgt.chap_during_discovery -- 
chap/chap_common.sh@58 -- # sleep 3 00:09:08.255 17:00:00 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:09:08.255 17:00:00 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:09:08.822 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:09:08.822 executing discovery with adding credential to initiator 00:09:08.822 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@37 -- # echo 'executing discovery with adding credential to initiator' 00:09:08.822 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@38 -- # rc=0 00:09:08.822 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@39 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:09:09.079 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:09:09.079 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@40 -- # '[' 0 -ne 0 ']' 00:09:09.079 DONE 00:09:09.079 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@44 -- # echo DONE 00:09:09.079 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@45 -- # default_initiator_chap_credentials 00:09:09.079 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:09:09.079 iscsiadm: No matching sessions found 00:09:09.079 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:09:09.079 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:09:09.079 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:09:09.079 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:09:09.080 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:09:09.080 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:09:09.080 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:09:09.080 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:09:09.080 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:09:09.080 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:09:09.080 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:09:09.080 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 
's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:09:09.080 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:09:09.080 17:00:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:09:12.359 17:00:04 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:09:12.359 17:00:04 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:09:13.299 17:00:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:13.299 17:00:05 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:09:13.299 17:00:05 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@49 -- # killprocess 63095 00:09:13.299 17:00:05 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@950 -- # '[' -z 63095 ']' 00:09:13.299 17:00:05 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # kill -0 63095 00:09:13.299 17:00:05 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@955 -- # uname 00:09:13.299 17:00:05 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.299 17:00:05 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63095 00:09:13.299 killing process with pid 63095 00:09:13.299 17:00:05 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:13.299 17:00:05 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:13.299 17:00:05 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63095' 00:09:13.299 17:00:05 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@969 -- # kill 63095 00:09:13.299 17:00:05 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@974 -- # wait 63095 00:09:13.558 17:00:05 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@51 -- # iscsitestfini 00:09:13.558 17:00:05 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:09:13.558 00:09:13.558 real 0m15.183s 00:09:13.558 user 0m15.154s 00:09:13.558 sys 0m0.732s 00:09:13.558 17:00:05 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.558 ************************************ 00:09:13.558 END TEST chap_during_discovery 00:09:13.558 ************************************ 00:09:13.558 17:00:05 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:13.558 17:00:05 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@33 -- # run_test chap_mutual_auth /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:09:13.558 17:00:05 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:13.558 17:00:05 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.558 17:00:05 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:09:13.558 ************************************ 00:09:13.558 START TEST chap_mutual_auth 00:09:13.558 ************************************ 00:09:13.558 17:00:05 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:09:13.821 * Looking for test storage... 
00:09:13.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@13 -- # USER=chapo 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@14 -- # MUSER=mchapo 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@15 -- # PASS=123456789123 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@16 -- # MPASS=321978654321 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@19 -- # iscsitestinit 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@21 -- # set_up_iscsi_target 
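
On the initiator side both tests drive open-iscsi entirely by rewriting /etc/iscsi/iscsid.conf with sed and restarting iscsid. After the enabling pass in the discovery test above (chap_common.sh@127-132), the discovery stanza of that file ends up approximately as below; the surrounding template text depends on the installed open-iscsi version, so treat this as a sketch of the net effect rather than the literal file:

    discovery.sendtargets.auth.authmethod = CHAP
    discovery.sendtargets.auth.username = chapo
    discovery.sendtargets.auth.password = 123456789123
    # the *_in keys carry the reverse (target-to-initiator) secret for mutual CHAP
    discovery.sendtargets.auth.username_in = mchapo
    discovery.sendtargets.auth.password_in = 321978654321

The default_initiator_chap_credentials helper simply re-comments all of these keys and restarts iscsid, which is why each test both begins and ends with that pass.
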
00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@142 -- # pid=63366 00:09:13.821 iSCSI target launched. pid: 63366 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 63366' 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@145 -- # waitforlisten 63366 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@831 -- # '[' -z 63366 ']' 00:09:13.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.821 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.822 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.822 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:13.822 [2024-07-25 17:00:06.126986] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:13.822 [2024-07-25 17:00:06.127065] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63366 ] 00:09:14.083 [2024-07-25 17:00:06.372278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.083 [2024-07-25 17:00:06.451410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.651 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.651 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@864 -- # return 0 00:09:14.651 17:00:07 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:09:14.651 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.651 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:14.651 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.651 17:00:07 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:09:14.651 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.651 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.910 iscsi_tgt is listening. Running tests... 
00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:14.910 Malloc0 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.910 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:14.911 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.911 17:00:07 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@155 -- # sleep 1 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:09:15.919 configuring target for authentication 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@24 -- # echo 'configuring target for authentication' 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:09:15.919 17:00:08 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:09:15.919 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
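
The target side of this mutual-auth test differs from the discovery test in where the credentials attach: with -l (login phase) parsed above, chap_common.sh applies auth group 1 to the target node itself rather than only to discovery. Reduced to rpc.py calls, the configuration spanning these records amounts to roughly the following, with the same $rpc alias as in the earlier sketch:

    $rpc iscsi_create_auth_group 1
    $rpc iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1
    # -r requires CHAP at login; note there is no -m, so mutual CHAP stays off on the node
    $rpc iscsi_target_node_set_auth -g 1 -r iqn.2016-06.io.spdk:disk1
    $rpc iscsi_set_discovery_auth -r -g 1      # discovery likewise requires one-way CHAP
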
00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 0 -eq 1 ']' 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@99 -- # rpc_cmd iscsi_target_node_set_auth -g 1 -r iqn.2016-06.io.spdk:disk1 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 0 -eq 1 ']' 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@106 -- # rpc_cmd iscsi_set_discovery_auth -r -g 1 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.920 executing discovery without adding credential to initiator - we expect failure 00:09:15.920 configuring initiator with bidirectional authentication 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@28 -- # echo 'configuring initiator with bidirectional authentication' 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@29 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:09:15.920 
17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:09:15.920 iscsiadm: No matching sessions found 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # true 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:09:15.920 iscsiadm: No records found 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # true 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:09:15.920 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 
's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:09:16.178 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:09:16.179 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:09:16.179 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:09:16.179 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:09:16.179 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:09:16.179 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:09:19.474 17:00:11 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:09:19.474 17:00:11 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:09:20.040 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:20.040 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@116 -- # '[' 1 -eq 1 ']' 00:09:20.040 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@117 -- # sed -i 's/#node.session.auth.authmethod = CHAP/node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:09:20.040 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@118 -- # sed -i 's/#node.session.auth.username =.*/node.session.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@119 -- # sed -i 's/#node.session.auth.password =.*/node.session.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' 1 -eq 1 ']' 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n 321978654321 ']' 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n mchapo ']' 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@121 -- # sed -i 's/#node.session.auth.username_in =.*/node.session.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@122 -- # sed -i 's/#node.session.auth.password_in =.*/node.session.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:09:20.298 17:00:12 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@135 -- # restart_iscsid 00:09:20.298 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:09:23.586 17:00:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:09:23.586 17:00:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:09:24.520 executing discovery - target should not be discovered since the -m option was not used 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@30 -- # echo 'executing discovery - target should not be discovered since the -m option was not used' 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@31 -- # rc=0 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:09:24.520 [2024-07-25 17:00:16.656148] iscsi.c: 982:iscsi_auth_params: *ERROR*: Initiator wants to use mutual CHAP for security, but it's not enabled. 
00:09:24.520 [2024-07-25 17:00:16.656188] iscsi.c:1957:iscsi_op_login_rsp_handle_csg_bit: *ERROR*: iscsi_auth_params() failed 00:09:24.520 iscsiadm: Login failed to authenticate with target 00:09:24.520 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:09:24.520 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # rc=24 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@33 -- # '[' 24 -eq 0 ']' 00:09:24.520 configuring target for authentication with the -m option 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@37 -- # echo 'configuring target for authentication with the -m option' 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@38 -- # config_chap_credentials_for_target -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=2 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:09:24.520 17:00:16 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 2 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 2 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 1 -eq 1 ']' 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@97 -- # rpc_cmd iscsi_target_node_set_auth -g 2 -r -m iqn.2016-06.io.spdk:disk1 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 2 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.520 executing discovery: 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@39 -- # echo 'executing discovery:' 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@40 -- # rc=0 
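The trace above is the whole mutual-CHAP recipe in two halves: on the target, an auth group gets a bidirectional secret pair and both the target node and discovery are switched to require mutual CHAP (-r -m); on the initiator, iscsid.conf mirrors the mutual pair in the *_in keys. A condensed sketch, reusing only arguments visible in the log (auth group 2, forward pair chapo/123456789123, mutual pair mchapo/321978654321, target iqn.2016-06.io.spdk:disk1); rpc.py abbreviates scripts/rpc.py, and the forward-direction iscsid.conf keys are assumed to have been set by an earlier step of the test:

  # target side: secret pair plus mutual-CHAP enforcement for node and discovery
  rpc.py iscsi_create_auth_group 2
  rpc.py iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 2
  rpc.py iscsi_target_node_set_auth -g 2 -r -m iqn.2016-06.io.spdk:disk1
  rpc.py iscsi_set_discovery_auth -r -m -g 2

  # initiator side: the *_in keys carry the mutual (target-authenticating) pair
  sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf
  sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf
  systemctl restart iscsid

With both sides configured, the discovery and login attempts below succeed where the earlier one (no -m) was rejected.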
00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@41 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:09:24.520 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:09:24.520 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@42 -- # '[' 0 -ne 0 ']' 00:09:24.520 executing login: 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@46 -- # echo 'executing login:' 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@47 -- # rc=0 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@48 -- # iscsiadm -m node -l -p 10.0.0.1:3260 00:09:24.521 Logging in to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:09:24.521 Login to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] successful. 00:09:24.521 DONE 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@49 -- # '[' 0 -ne 0 ']' 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@54 -- # echo DONE 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@55 -- # default_initiator_chap_credentials 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:09:24.521 [2024-07-25 17:00:16.797751] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:24.521 Logging out of session [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:09:24.521 Logout of [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] successful. 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:09:24.521 
17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:09:24.521 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:09:27.833 17:00:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:09:27.833 17:00:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:09:28.775 17:00:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:28.775 17:00:20 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@57 -- # trap - SIGINT SIGTERM EXIT 00:09:28.775 17:00:20 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@59 -- # killprocess 63366 00:09:28.775 17:00:20 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@950 -- # '[' -z 63366 ']' 00:09:28.775 17:00:20 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # kill -0 63366 00:09:28.775 17:00:20 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@955 -- # uname 00:09:28.775 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.775 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63366 00:09:28.775 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:28.775 killing process with pid 63366 00:09:28.775 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:28.775 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63366' 00:09:28.775 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@969 -- # kill 63366 00:09:28.775 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@974 -- # wait 63366 00:09:29.033 17:00:21 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@61 -- # iscsitestfini 00:09:29.033 17:00:21 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:09:29.033 00:09:29.033 real 0m15.444s 00:09:29.033 user 0m15.550s 00:09:29.033 sys 0m0.715s 00:09:29.033 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.033 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:09:29.033 ************************************ 00:09:29.033 END TEST chap_mutual_auth 00:09:29.033 ************************************ 00:09:29.033 17:00:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@34 -- # run_test iscsi_tgt_reset /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:09:29.033 17:00:21 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:29.033 17:00:21 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.033 17:00:21 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:09:29.033 ************************************ 00:09:29.033 START TEST iscsi_tgt_reset 00:09:29.033 ************************************ 00:09:29.034 17:00:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:09:29.291 * Looking for test storage... 
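Each test app is torn down through the killprocess helper traced just before the END banner above (pid 63366 here, and again for the targets started below): it probes liveness with kill -0, checks via ps --no-headers -o comm= that the pid still names the expected process, then kills and reaps it. A simplified reconstruction of that pattern; the real helper additionally handles the case where the process turns out to be a sudo wrapper (the '[' reactor_1 = sudo ']' check above), a branch elided here:

  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return           # nothing to do if the pid is already gone
    if [[ $(uname) == Linux ]]; then
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [[ $name == sudo ]] && return 1  # sudo-wrapped case handled separately upstream
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                        # reap the child and surface its exit status
  }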
00:09:29.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@11 -- # iscsitestinit 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:09:29.291 17:00:21 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@18 -- # hash sg_reset 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@22 -- # timing_enter start_iscsi_tgt 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@25 -- # pid=63667 00:09:29.292 Process pid: 63667 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@26 -- # echo 'Process pid: 63667' 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@28 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@24 -- # ip netns exec spdk_iscsi_ns 
/home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@30 -- # waitforlisten 63667 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@831 -- # '[' -z 63667 ']' 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.292 17:00:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:09:29.292 [2024-07-25 17:00:21.639693] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:29.292 [2024-07-25 17:00:21.639764] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63667 ] 00:09:29.550 [2024-07-25 17:00:21.782246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.550 [2024-07-25 17:00:21.867965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.115 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.115 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@864 -- # return 0 00:09:30.115 17:00:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@31 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:09:30.115 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.115 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:09:30.115 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.115 17:00:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@32 -- # rpc_cmd framework_start_init 00:09:30.115 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.115 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.372 iscsi_tgt is listening. Running tests... 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@33 -- # echo 'iscsi_tgt is listening. Running tests...' 
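This is SPDK's two-phase startup: iscsi_tgt launched with --wait-for-rpc brings up only the RPC server, so subsystem options can be set before framework_start_init releases the rest of initialization. The shape of it, with the harness's waitforlisten approximated by a simple poll (rpc_get_methods is a cheap RPC that succeeds as soon as /var/tmp/spdk.sock answers):

  ip netns exec spdk_iscsi_ns ./build/bin/iscsi_tgt --wait-for-rpc &
  pid=$!

  # approximate waitforlisten: retry until the RPC socket is up
  until rpc.py -t 1 rpc_get_methods &> /dev/null; do sleep 0.1; done

  rpc.py iscsi_set_options -o 30 -a 16   # must run before framework init
  rpc.py framework_start_init            # subsystems initialize; tests can begin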
00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@35 -- # timing_exit start_iscsi_tgt 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@37 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@38 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@39 -- # rpc_cmd bdev_malloc_create 64 512 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:09:30.372 Malloc0 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@44 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.372 17:00:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@45 -- # sleep 1 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@47 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:09:31.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@48 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:09:31.745 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:09:31.745 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
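From an empty target to a logged-in session is a handful of commands; this condenses the trace with its exact arguments (portal group 1 on 10.0.0.1:3260, initiator group 2 admitting any initiator name from 10.0.0.2/32, a 64 MiB malloc bdev with 512 B blocks, queue depth 64, -d to disable CHAP):

  rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
  rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
  rpc.py bdev_malloc_create 64 512                          # returns Malloc0
  rpc.py iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d

  iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260     # 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3
  iscsiadm -m node --login -p 10.0.0.1:3260

The waitforiscsidevices loop below then polls iscsiadm -m session -P 3 until the attached SCSI disk (sda) shows up on the initiator.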
00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@49 -- # waitforiscsidevices 1 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@116 -- # local num=1 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:31.745 [2024-07-25 17:00:23.877732] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # n=1 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@123 -- # return 0 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # iscsiadm -m session -P 3 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # grep 'Attached scsi disk' 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # awk '{print $4}' 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # dev=sda 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@54 -- # fiopid=63729 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 60 00:09:31.745 FIO pid: 63729 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@55 -- # echo 'FIO pid: 63729' 00:09:31.745 17:00:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@57 -- # trap 'iscsicleanup; killprocess $pid; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.746 17:00:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:09:31.746 17:00:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:09:31.746 [global] 00:09:31.746 thread=1 00:09:31.746 invalidate=1 00:09:31.746 rw=read 00:09:31.746 time_based=1 00:09:31.746 runtime=60 00:09:31.746 ioengine=libaio 00:09:31.746 direct=1 00:09:31.746 bs=512 00:09:31.746 iodepth=1 00:09:31.746 norandommap=1 00:09:31.746 numjobs=1 00:09:31.746 00:09:31.746 [job0] 00:09:31.746 filename=/dev/sda 00:09:31.746 queue_depth set to 113 (sda) 00:09:31.746 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:09:31.746 fio-3.35 00:09:31.746 Starting 1 thread 00:09:32.681 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 63667 00:09:32.681 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 63729 00:09:32.681 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:09:32.681 [2024-07-25 17:00:24.907226] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:09:32.681 [2024-07-25 17:00:24.907287] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:09:32.681 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:09:32.681 [2024-07-25 17:00:24.909102] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:33.617 17:00:25 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 63667 00:09:33.617 17:00:25 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 63729 00:09:33.617 17:00:25 
iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:09:33.617 17:00:25 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:09:34.552 17:00:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 63667 00:09:34.552 17:00:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 63729 00:09:34.552 17:00:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:09:34.552 [2024-07-25 17:00:26.914368] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:09:34.552 [2024-07-25 17:00:26.914429] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:09:34.552 17:00:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:09:34.552 [2024-07-25 17:00:26.915930] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:35.487 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 63667 00:09:35.487 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 63729 00:09:35.487 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:09:35.487 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:09:36.859 17:00:28 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 63667 00:09:36.859 17:00:28 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 63729 00:09:36.859 17:00:28 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:09:36.859 [2024-07-25 17:00:28.923658] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:09:36.859 [2024-07-25 17:00:28.923722] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:09:36.859 17:00:28 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:09:36.859 [2024-07-25 17:00:28.925242] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 63667 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 63729 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@70 -- # kill 63729 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # wait 63729 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # true 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@73 -- # trap - SIGINT SIGTERM EXIT 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@75 -- # iscsicleanup 00:09:37.543 Cleaning up iSCSI connection 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:09:37.543 fio: pid=63755, err=19/file:io_u.c:1889, func=io_u error, error=No such device 00:09:37.543 fio: io_u error on file /dev/sda: No such device: read offset=51172352, buflen=512 00:09:37.543 00:09:37.543 job0: (groupid=0, jobs=1): err=19 (file:io_u.c:1889, func=io_u error, error=No such device): pid=63755: Thu Jul 25 17:00:29 2024 00:09:37.543 read: IOPS=17.5k, BW=8744KiB/s (8954kB/s)(48.8MiB/5715msec) 00:09:37.543 slat (usec): min=2, max=1208, avg= 5.42, stdev= 4.07 00:09:37.543 clat (nsec): min=1427, max=3703.4k, avg=51295.67, stdev=50591.64 00:09:37.543 lat (usec): min=44, max=3707, avg=56.70, stdev=50.71 00:09:37.543 clat percentiles (usec): 00:09:37.543 | 1.00th=[ 45], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 48], 
00:09:37.543 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 48], 60.00th=[ 48], 00:09:37.543 | 70.00th=[ 50], 80.00th=[ 52], 90.00th=[ 57], 95.00th=[ 63], 00:09:37.543 | 99.00th=[ 73], 99.50th=[ 79], 99.90th=[ 223], 99.95th=[ 807], 00:09:37.543 | 99.99th=[ 2638] 00:09:37.543 bw ( KiB/s): min= 7941, max= 9295, per=99.91%, avg=8736.18, stdev=409.09, samples=11 00:09:37.543 iops : min=15882, max=18590, avg=17472.36, stdev=818.17, samples=11 00:09:37.543 lat (usec) : 2=0.01%, 50=71.68%, 100=28.15%, 250=0.07%, 500=0.02% 00:09:37.543 lat (usec) : 750=0.02%, 1000=0.02% 00:09:37.543 lat (msec) : 2=0.01%, 4=0.03% 00:09:37.543 cpu : usr=4.22%, sys=13.32%, ctx=99963, majf=0, minf=2 00:09:37.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.543 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.543 issued rwts: total=99947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.543 00:09:37.543 Run status group 0 (all jobs): 00:09:37.543 READ: bw=8744KiB/s (8954kB/s), 8744KiB/s-8744KiB/s (8954kB/s-8954kB/s), io=48.8MiB (51.2MB), run=5715-5715msec 00:09:37.543 00:09:37.543 Disk stats (read/write): 00:09:37.543 sda: ios=98772/0, merge=0/0, ticks=4897/0, in_queue=4897, util=97.30% 00:09:37.543 Logging out of session [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:09:37.543 Logout of [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@985 -- # rm -rf 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@76 -- # killprocess 63667 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@950 -- # '[' -z 63667 ']' 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # kill -0 63667 00:09:37.543 17:00:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@955 -- # uname 00:09:37.543 17:00:30 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.543 17:00:30 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63667 00:09:37.801 17:00:30 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:37.801 17:00:30 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:37.801 killing process with pid 63667 00:09:37.801 17:00:30 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63667' 00:09:37.801 17:00:30 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@969 -- # kill 63667 00:09:37.801 17:00:30 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@974 -- # wait 63667 00:09:38.059 17:00:30 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@77 -- # iscsitestfini 00:09:38.059 17:00:30 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:09:38.059 00:09:38.059 real 0m8.947s 00:09:38.059 user 0m6.472s 00:09:38.059 sys 0m2.256s 00:09:38.059 ************************************ 00:09:38.059 END TEST iscsi_tgt_reset 00:09:38.059 17:00:30 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.059 17:00:30 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- 
# set +x 00:09:38.059 ************************************ 00:09:38.059 17:00:30 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@35 -- # run_test iscsi_tgt_rpc_config /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:09:38.059 17:00:30 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:38.059 17:00:30 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.059 17:00:30 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:09:38.059 ************************************ 00:09:38.059 START TEST iscsi_tgt_rpc_config 00:09:38.059 ************************************ 00:09:38.059 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:09:38.317 * Looking for test storage... 00:09:38.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config 00:09:38.317 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:09:38.317 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:09:38.317 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@11 -- # iscsitestinit 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@15 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@16 -- # rpc_config_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@18 -- # timing_enter start_iscsi_tgt 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@21 -- # pid=63906 00:09:38.318 Process pid: 63906 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@22 -- # echo 'Process pid: 63906' 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@24 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@20 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@26 -- # waitforlisten 63906 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@831 -- # '[' -z 63906 ']' 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.318 17:00:30 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:09:38.318 [2024-07-25 17:00:30.643624] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:38.318 [2024-07-25 17:00:30.643693] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63906 ] 00:09:38.318 [2024-07-25 17:00:30.784609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.576 [2024-07-25 17:00:30.871511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.144 17:00:31 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.144 17:00:31 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@864 -- # return 0 00:09:39.144 17:00:31 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:09:39.144 17:00:31 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@28 -- # rpc_wait_pid=63922 00:09:39.144 17:00:31 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:09:39.401 17:00:31 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@32 -- # ps 63922 00:09:39.401 PID TTY STAT TIME COMMAND 00:09:39.401 63922 ? 
S 0:00 python3 /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:09:39.401 17:00:31 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:09:39.657 17:00:32 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@35 -- # sleep 1 00:09:41.027 iscsi_tgt is listening. Running tests... 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@36 -- # echo 'iscsi_tgt is listening. Running tests...' 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@39 -- # NOT ps 63922 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # local es=0 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@652 -- # valid_exec_arg ps 63922 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@638 -- # local arg=ps 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -t ps 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # type -P ps 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # arg=/usr/bin/ps 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/ps ]] 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@653 -- # ps 63922 00:09:41.027 PID TTY STAT TIME COMMAND 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@653 -- # es=1 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@43 -- # rpc_wait_pid=63947 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@44 -- # sleep 1 00:09:41.027 17:00:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@45 -- # NOT ps 63947 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # local es=0 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@652 -- # valid_exec_arg ps 63947 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@638 -- # local arg=ps 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -t ps 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # type -P ps 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # arg=/usr/bin/ps 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/ps ]] 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@653 -- # ps 63947 00:09:41.957 PID TTY STAT TIME COMMAND 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@653 -- # es=1 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@47 -- # timing_exit start_iscsi_tgt 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:09:41.957 17:00:34 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@49 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py /home/vagrant/spdk_repo/spdk/scripts/rpc.py 10.0.0.1 10.0.0.2 3260 10.0.0.2/32 spdk_iscsi_ns 00:10:03.876 [2024-07-25 17:00:55.858733] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:06.408 [2024-07-25 17:00:58.292563] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:07.346 verify_log_flag_rpc_methods passed 00:10:07.346 create_malloc_bdevs_rpc_methods passed 00:10:07.346 verify_portal_groups_rpc_methods passed 00:10:07.346 verify_initiator_groups_rpc_method passed. 00:10:07.346 This issue will be fixed later. 00:10:07.346 verify_target_nodes_rpc_methods passed. 
00:10:07.346 verify_scsi_devices_rpc_methods passed 00:10:07.346 verify_iscsi_connection_rpc_methods passed 00:10:07.346 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:10:07.346 [ 00:10:07.346 { 00:10:07.346 "name": "Malloc0", 00:10:07.346 "aliases": [ 00:10:07.346 "a7e7011a-9700-4e1c-9977-a96a7a040be6" 00:10:07.346 ], 00:10:07.346 "product_name": "Malloc disk", 00:10:07.346 "block_size": 512, 00:10:07.346 "num_blocks": 131072, 00:10:07.346 "uuid": "a7e7011a-9700-4e1c-9977-a96a7a040be6", 00:10:07.346 "assigned_rate_limits": { 00:10:07.346 "rw_ios_per_sec": 0, 00:10:07.346 "rw_mbytes_per_sec": 0, 00:10:07.346 "r_mbytes_per_sec": 0, 00:10:07.346 "w_mbytes_per_sec": 0 00:10:07.346 }, 00:10:07.346 "claimed": false, 00:10:07.346 "zoned": false, 00:10:07.346 "supported_io_types": { 00:10:07.346 "read": true, 00:10:07.346 "write": true, 00:10:07.346 "unmap": true, 00:10:07.346 "flush": true, 00:10:07.346 "reset": true, 00:10:07.346 "nvme_admin": false, 00:10:07.346 "nvme_io": false, 00:10:07.346 "nvme_io_md": false, 00:10:07.346 "write_zeroes": true, 00:10:07.346 "zcopy": true, 00:10:07.346 "get_zone_info": false, 00:10:07.346 "zone_management": false, 00:10:07.346 "zone_append": false, 00:10:07.346 "compare": false, 00:10:07.346 "compare_and_write": false, 00:10:07.346 "abort": true, 00:10:07.346 "seek_hole": false, 00:10:07.346 "seek_data": false, 00:10:07.346 "copy": true, 00:10:07.346 "nvme_iov_md": false 00:10:07.346 }, 00:10:07.346 "memory_domains": [ 00:10:07.346 { 00:10:07.346 "dma_device_id": "system", 00:10:07.346 "dma_device_type": 1 00:10:07.346 }, 00:10:07.346 { 00:10:07.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.346 "dma_device_type": 2 00:10:07.346 } 00:10:07.346 ], 00:10:07.346 "driver_specific": {} 00:10:07.346 }, 00:10:07.346 { 00:10:07.346 "name": "Malloc1", 00:10:07.346 "aliases": [ 00:10:07.346 "e9d63801-b33f-4576-91ec-abcac2e7c2a1" 00:10:07.346 ], 00:10:07.346 "product_name": "Malloc disk", 00:10:07.346 "block_size": 512, 00:10:07.346 "num_blocks": 131072, 00:10:07.346 "uuid": "e9d63801-b33f-4576-91ec-abcac2e7c2a1", 00:10:07.346 "assigned_rate_limits": { 00:10:07.346 "rw_ios_per_sec": 0, 00:10:07.346 "rw_mbytes_per_sec": 0, 00:10:07.346 "r_mbytes_per_sec": 0, 00:10:07.346 "w_mbytes_per_sec": 0 00:10:07.346 }, 00:10:07.346 "claimed": false, 00:10:07.346 "zoned": false, 00:10:07.346 "supported_io_types": { 00:10:07.346 "read": true, 00:10:07.346 "write": true, 00:10:07.346 "unmap": true, 00:10:07.346 "flush": true, 00:10:07.346 "reset": true, 00:10:07.346 "nvme_admin": false, 00:10:07.346 "nvme_io": false, 00:10:07.346 "nvme_io_md": false, 00:10:07.346 "write_zeroes": true, 00:10:07.346 "zcopy": true, 00:10:07.346 "get_zone_info": false, 00:10:07.346 "zone_management": false, 00:10:07.346 "zone_append": false, 00:10:07.346 "compare": false, 00:10:07.346 "compare_and_write": false, 00:10:07.346 "abort": true, 00:10:07.346 "seek_hole": false, 00:10:07.346 "seek_data": false, 00:10:07.346 "copy": true, 00:10:07.346 "nvme_iov_md": false 00:10:07.346 }, 00:10:07.346 "memory_domains": [ 00:10:07.346 { 00:10:07.346 "dma_device_id": "system", 00:10:07.346 "dma_device_type": 1 00:10:07.346 }, 00:10:07.346 { 00:10:07.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.346 "dma_device_type": 2 00:10:07.346 } 00:10:07.346 ], 00:10:07.346 "driver_specific": {} 00:10:07.346 }, 00:10:07.346 { 00:10:07.346 "name": "Malloc2", 00:10:07.346 "aliases": [ 00:10:07.346 
"c2a3c053-b6cd-4463-a39c-20c11f8c63c7" 00:10:07.346 ], 00:10:07.346 "product_name": "Malloc disk", 00:10:07.346 "block_size": 512, 00:10:07.346 "num_blocks": 131072, 00:10:07.346 "uuid": "c2a3c053-b6cd-4463-a39c-20c11f8c63c7", 00:10:07.346 "assigned_rate_limits": { 00:10:07.347 "rw_ios_per_sec": 0, 00:10:07.347 "rw_mbytes_per_sec": 0, 00:10:07.347 "r_mbytes_per_sec": 0, 00:10:07.347 "w_mbytes_per_sec": 0 00:10:07.347 }, 00:10:07.347 "claimed": false, 00:10:07.347 "zoned": false, 00:10:07.347 "supported_io_types": { 00:10:07.347 "read": true, 00:10:07.347 "write": true, 00:10:07.347 "unmap": true, 00:10:07.347 "flush": true, 00:10:07.347 "reset": true, 00:10:07.347 "nvme_admin": false, 00:10:07.347 "nvme_io": false, 00:10:07.347 "nvme_io_md": false, 00:10:07.347 "write_zeroes": true, 00:10:07.347 "zcopy": true, 00:10:07.347 "get_zone_info": false, 00:10:07.347 "zone_management": false, 00:10:07.347 "zone_append": false, 00:10:07.347 "compare": false, 00:10:07.347 "compare_and_write": false, 00:10:07.347 "abort": true, 00:10:07.347 "seek_hole": false, 00:10:07.347 "seek_data": false, 00:10:07.347 "copy": true, 00:10:07.347 "nvme_iov_md": false 00:10:07.347 }, 00:10:07.347 "memory_domains": [ 00:10:07.347 { 00:10:07.347 "dma_device_id": "system", 00:10:07.347 "dma_device_type": 1 00:10:07.347 }, 00:10:07.347 { 00:10:07.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.347 "dma_device_type": 2 00:10:07.347 } 00:10:07.347 ], 00:10:07.347 "driver_specific": {} 00:10:07.347 }, 00:10:07.347 { 00:10:07.347 "name": "Malloc3", 00:10:07.347 "aliases": [ 00:10:07.347 "abcb988e-566e-4d1e-b4e1-ca5c6581033e" 00:10:07.347 ], 00:10:07.347 "product_name": "Malloc disk", 00:10:07.347 "block_size": 512, 00:10:07.347 "num_blocks": 131072, 00:10:07.347 "uuid": "abcb988e-566e-4d1e-b4e1-ca5c6581033e", 00:10:07.347 "assigned_rate_limits": { 00:10:07.347 "rw_ios_per_sec": 0, 00:10:07.347 "rw_mbytes_per_sec": 0, 00:10:07.347 "r_mbytes_per_sec": 0, 00:10:07.347 "w_mbytes_per_sec": 0 00:10:07.347 }, 00:10:07.347 "claimed": false, 00:10:07.347 "zoned": false, 00:10:07.347 "supported_io_types": { 00:10:07.347 "read": true, 00:10:07.347 "write": true, 00:10:07.347 "unmap": true, 00:10:07.347 "flush": true, 00:10:07.347 "reset": true, 00:10:07.347 "nvme_admin": false, 00:10:07.347 "nvme_io": false, 00:10:07.347 "nvme_io_md": false, 00:10:07.347 "write_zeroes": true, 00:10:07.347 "zcopy": true, 00:10:07.347 "get_zone_info": false, 00:10:07.347 "zone_management": false, 00:10:07.347 "zone_append": false, 00:10:07.347 "compare": false, 00:10:07.347 "compare_and_write": false, 00:10:07.347 "abort": true, 00:10:07.347 "seek_hole": false, 00:10:07.347 "seek_data": false, 00:10:07.347 "copy": true, 00:10:07.347 "nvme_iov_md": false 00:10:07.347 }, 00:10:07.347 "memory_domains": [ 00:10:07.347 { 00:10:07.347 "dma_device_id": "system", 00:10:07.347 "dma_device_type": 1 00:10:07.347 }, 00:10:07.347 { 00:10:07.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.347 "dma_device_type": 2 00:10:07.347 } 00:10:07.347 ], 00:10:07.347 "driver_specific": {} 00:10:07.347 }, 00:10:07.347 { 00:10:07.347 "name": "Malloc4", 00:10:07.347 "aliases": [ 00:10:07.347 "dadea76b-1c0c-4954-b104-05f6de59aeba" 00:10:07.347 ], 00:10:07.347 "product_name": "Malloc disk", 00:10:07.347 "block_size": 512, 00:10:07.347 "num_blocks": 131072, 00:10:07.347 "uuid": "dadea76b-1c0c-4954-b104-05f6de59aeba", 00:10:07.347 "assigned_rate_limits": { 00:10:07.347 "rw_ios_per_sec": 0, 00:10:07.347 "rw_mbytes_per_sec": 0, 00:10:07.347 "r_mbytes_per_sec": 0, 
00:10:07.347 "w_mbytes_per_sec": 0 00:10:07.347 }, 00:10:07.347 "claimed": false, 00:10:07.347 "zoned": false, 00:10:07.347 "supported_io_types": { 00:10:07.347 "read": true, 00:10:07.347 "write": true, 00:10:07.347 "unmap": true, 00:10:07.347 "flush": true, 00:10:07.347 "reset": true, 00:10:07.347 "nvme_admin": false, 00:10:07.347 "nvme_io": false, 00:10:07.347 "nvme_io_md": false, 00:10:07.347 "write_zeroes": true, 00:10:07.347 "zcopy": true, 00:10:07.347 "get_zone_info": false, 00:10:07.347 "zone_management": false, 00:10:07.347 "zone_append": false, 00:10:07.347 "compare": false, 00:10:07.347 "compare_and_write": false, 00:10:07.347 "abort": true, 00:10:07.347 "seek_hole": false, 00:10:07.347 "seek_data": false, 00:10:07.347 "copy": true, 00:10:07.347 "nvme_iov_md": false 00:10:07.347 }, 00:10:07.347 "memory_domains": [ 00:10:07.347 { 00:10:07.347 "dma_device_id": "system", 00:10:07.347 "dma_device_type": 1 00:10:07.347 }, 00:10:07.347 { 00:10:07.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.347 "dma_device_type": 2 00:10:07.347 } 00:10:07.347 ], 00:10:07.347 "driver_specific": {} 00:10:07.347 }, 00:10:07.347 { 00:10:07.347 "name": "Malloc5", 00:10:07.347 "aliases": [ 00:10:07.347 "9226fa5f-f6dc-4b2b-8fc3-28fa16460b43" 00:10:07.347 ], 00:10:07.347 "product_name": "Malloc disk", 00:10:07.347 "block_size": 512, 00:10:07.347 "num_blocks": 131072, 00:10:07.347 "uuid": "9226fa5f-f6dc-4b2b-8fc3-28fa16460b43", 00:10:07.347 "assigned_rate_limits": { 00:10:07.347 "rw_ios_per_sec": 0, 00:10:07.347 "rw_mbytes_per_sec": 0, 00:10:07.347 "r_mbytes_per_sec": 0, 00:10:07.347 "w_mbytes_per_sec": 0 00:10:07.347 }, 00:10:07.347 "claimed": false, 00:10:07.347 "zoned": false, 00:10:07.347 "supported_io_types": { 00:10:07.347 "read": true, 00:10:07.347 "write": true, 00:10:07.347 "unmap": true, 00:10:07.347 "flush": true, 00:10:07.347 "reset": true, 00:10:07.347 "nvme_admin": false, 00:10:07.347 "nvme_io": false, 00:10:07.347 "nvme_io_md": false, 00:10:07.347 "write_zeroes": true, 00:10:07.347 "zcopy": true, 00:10:07.347 "get_zone_info": false, 00:10:07.347 "zone_management": false, 00:10:07.347 "zone_append": false, 00:10:07.347 "compare": false, 00:10:07.347 "compare_and_write": false, 00:10:07.347 "abort": true, 00:10:07.347 "seek_hole": false, 00:10:07.347 "seek_data": false, 00:10:07.347 "copy": true, 00:10:07.347 "nvme_iov_md": false 00:10:07.347 }, 00:10:07.347 "memory_domains": [ 00:10:07.347 { 00:10:07.347 "dma_device_id": "system", 00:10:07.347 "dma_device_type": 1 00:10:07.347 }, 00:10:07.347 { 00:10:07.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.347 "dma_device_type": 2 00:10:07.347 } 00:10:07.347 ], 00:10:07.347 "driver_specific": {} 00:10:07.347 } 00:10:07.347 ] 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@53 -- # trap - SIGINT SIGTERM EXIT 00:10:07.347 Cleaning up iSCSI connection 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@55 -- # iscsicleanup 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:10:07.347 iscsiadm: No matching sessions found 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@983 -- # true 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:10:07.347 iscsiadm: No records found 00:10:07.347 
17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@984 -- # true 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@985 -- # rm -rf 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@56 -- # killprocess 63906 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@950 -- # '[' -z 63906 ']' 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # kill -0 63906 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@955 -- # uname 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63906 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:07.347 killing process with pid 63906 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63906' 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@969 -- # kill 63906 00:10:07.347 17:00:59 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@974 -- # wait 63906 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@58 -- # iscsitestfini 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:10:07.917 00:10:07.917 real 0m29.731s 00:10:07.917 user 0m50.478s 00:10:07.917 sys 0m4.657s 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:10:07.917 ************************************ 00:10:07.917 END TEST iscsi_tgt_rpc_config 00:10:07.917 ************************************ 00:10:07.917 17:01:00 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@36 -- # run_test iscsi_tgt_iscsi_lvol /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:10:07.917 17:01:00 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:07.917 17:01:00 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.917 17:01:00 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:10:07.917 ************************************ 00:10:07.917 START TEST iscsi_tgt_iscsi_lvol 00:10:07.917 ************************************ 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:10:07.917 * Looking for test storage... 
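The iscsicleanup run before the END banner above is deliberately tolerant: with no sessions or node records left, iscsiadm exits nonzero ('No matching sessions found', 'No records found'), and the trace shows each failure being swallowed by a following true. In essence, and hedging that the trace elides the rm -rf target so it is omitted here:

  iscsiadm -m node --logout || true   # no active sessions is acceptable
  iscsiadm -m node -o delete || true  # no node records is acceptable
  # (the trailing 'rm -rf' in the trace does not show its argument)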
00:10:07.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@11 -- # iscsitestinit 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@13 -- # MALLOC_BDEV_SIZE=128 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@15 -- # '[' 0 -eq 1 ']' 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@19 -- # NUM_LVS=2 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@20 -- # NUM_LVOL=2 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@23 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@24 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@26 -- # timing_enter start_iscsi_tgt 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol 
-- common/autotest_common.sh@10 -- # set +x 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@29 -- # pid=64470 00:10:07.917 Process pid: 64470 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@30 -- # echo 'Process pid: 64470' 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@32 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@28 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@34 -- # waitforlisten 64470 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@831 -- # '[' -z 64470 ']' 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.917 17:01:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:08.176 [2024-07-25 17:01:00.446517] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:08.176 [2024-07-25 17:01:00.446608] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64470 ] 00:10:08.176 [2024-07-25 17:01:00.588531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.435 [2024-07-25 17:01:00.683881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.435 [2024-07-25 17:01:00.684035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.435 [2024-07-25 17:01:00.684036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.435 [2024-07-25 17:01:00.683987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.003 17:01:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.003 17:01:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@864 -- # return 0 00:10:09.003 17:01:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:10:09.003 17:01:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:10:09.571 17:01:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@37 -- # echo 'iscsi_tgt is listening. Running tests...' 00:10:09.571 iscsi_tgt is listening. Running tests... 
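The startup handshake traced above is worth spelling out: the target is launched inside the spdk_iscsi_ns namespace with --wait-for-rpc, which holds off subsystem init so that iscsi_set_options can still be applied, and framework_start_init then finishes bring-up. A hedged sketch of the same sequence; polling rpc_get_methods as the liveness probe is an assumption here, not necessarily what waitforlisten does internally:

    ip netns exec spdk_iscsi_ns ./build/bin/iscsi_tgt -m 0xF --wait-for-rpc &
    pid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the app answers
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
    ./scripts/rpc.py iscsi_set_options -o 30 -a 16   # must run before subsystem init
    ./scripts/rpc.py framework_start_init            # now start the framework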
00:10:09.571 17:01:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@39 -- # timing_exit start_iscsi_tgt 00:10:09.571 17:01:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:09.571 17:01:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:09.571 17:01:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@41 -- # timing_enter setup 00:10:09.571 17:01:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:09.571 17:01:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:09.571 17:01:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:10:09.829 17:01:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # seq 1 2 00:10:09.829 17:01:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:10:09.829 17:01:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=3 00:10:09.830 17:01:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 3 ANY 10.0.0.2/32 00:10:09.830 17:01:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 1 -eq 1 ']' 00:10:09.830 17:01:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:10:10.088 17:01:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # malloc_bdevs='Malloc0 ' 00:10:10.088 17:01:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:10:10.347 17:01:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # malloc_bdevs+=Malloc1 00:10:10.347 17:01:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:10.606 17:01:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@53 -- # bdev=raid0 00:10:10.606 17:01:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs_1 -c 1048576 00:10:10.865 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=14c80f58-ba88-4bff-88e3-0efc8fd9a1ea 00:10:10.865 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:10:10.865 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 2 00:10:10.865 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:10:10.865 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14c80f58-ba88-4bff-88e3-0efc8fd9a1ea lbd_1 10 00:10:11.123 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a4f783d9-816f-4abc-8188-860102982047 00:10:11.123 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a4f783d9-816f-4abc-8188-860102982047:0 ' 00:10:11.123 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:10:11.123 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14c80f58-ba88-4bff-88e3-0efc8fd9a1ea lbd_2 10 00:10:11.381 17:01:03 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=fbef2955-24f3-4a0a-9cde-44393c93743f 00:10:11.381 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='fbef2955-24f3-4a0a-9cde-44393c93743f:1 ' 00:10:11.381 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias 'a4f783d9-816f-4abc-8188-860102982047:0 fbef2955-24f3-4a0a-9cde-44393c93743f:1 ' 1:3 256 -d 00:10:11.381 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:10:11.381 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=4 00:10:11.381 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 4 ANY 10.0.0.2/32 00:10:11.640 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 2 -eq 1 ']' 00:10:11.640 17:01:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:10:11.901 17:01:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc2 00:10:11.901 17:01:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc2 lvs_2 -c 1048576 00:10:12.161 17:01:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=37392ef8-d1b5-4e5e-ba48-2d81bf067c9a 00:10:12.161 17:01:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:10:12.161 17:01:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 2 00:10:12.161 17:01:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:10:12.161 17:01:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 37392ef8-d1b5-4e5e-ba48-2d81bf067c9a lbd_1 10 00:10:12.161 17:01:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f47afd9e-47e3-4ad8-a12d-83e183de6b22 00:10:12.161 17:01:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f47afd9e-47e3-4ad8-a12d-83e183de6b22:0 ' 00:10:12.161 17:01:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:10:12.161 17:01:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 37392ef8-d1b5-4e5e-ba48-2d81bf067c9a lbd_2 10 00:10:12.420 17:01:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c4dd97e1-9dda-4802-9ac8-aebd4f508562 00:10:12.420 17:01:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c4dd97e1-9dda-4802-9ac8-aebd4f508562:1 ' 00:10:12.420 17:01:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias 'f47afd9e-47e3-4ad8-a12d-83e183de6b22:0 c4dd97e1-9dda-4802-9ac8-aebd4f508562:1 ' 1:4 256 -d 00:10:12.678 17:01:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@66 -- # timing_exit setup 00:10:12.678 17:01:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:12.678 17:01:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:12.678 17:01:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@68 -- # sleep 1 00:10:13.614 17:01:06 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@70 -- # timing_enter discovery 00:10:13.614 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.614 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:13.614 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@71 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:13.873 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:10:13.873 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@72 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:10:13.873 [2024-07-25 17:01:06.150555] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:13.873 [2024-07-25 17:01:06.159701] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:13.873 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:13.873 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:10:13.873 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:10:13.873 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@73 -- # waitforiscsidevices 4 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@116 -- # local num=4 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:13.873 [2024-07-25 17:01:06.178666] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:13.873 [2024-07-25 17:01:06.178672] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # n=4 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@120 -- # '[' 4 -ne 4 ']' 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@123 -- # return 0 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@74 -- # timing_exit discovery 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@76 -- # timing_enter fio 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:13.873 17:01:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 8 -t randwrite -r 10 -v 00:10:13.873 [global] 00:10:13.873 thread=1 00:10:13.873 invalidate=1 00:10:13.873 rw=randwrite 00:10:13.873 time_based=1 00:10:13.873 runtime=10 00:10:13.873 ioengine=libaio 00:10:13.873 direct=1 00:10:13.873 bs=131072 00:10:13.873 iodepth=8 00:10:13.873 norandommap=0 
00:10:13.873 numjobs=1 00:10:13.873 00:10:13.873 verify_dump=1 00:10:13.873 verify_backlog=512 00:10:13.873 verify_state_save=0 00:10:13.873 do_verify=1 00:10:13.873 verify=crc32c-intel 00:10:13.873 [job0] 00:10:13.873 filename=/dev/sdb 00:10:13.873 [job1] 00:10:13.873 filename=/dev/sdc 00:10:13.873 [job2] 00:10:13.873 filename=/dev/sda 00:10:13.873 [job3] 00:10:13.873 filename=/dev/sdd 00:10:13.873 queue_depth set to 113 (sdb) 00:10:13.873 queue_depth set to 113 (sdc) 00:10:14.132 queue_depth set to 113 (sda) 00:10:14.132 queue_depth set to 113 (sdd) 00:10:14.132 job0: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:10:14.132 job1: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:10:14.132 job2: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:10:14.132 job3: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:10:14.132 fio-3.35 00:10:14.132 Starting 4 threads 00:10:14.132 [2024-07-25 17:01:06.489993] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.132 [2024-07-25 17:01:06.491937] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.132 [2024-07-25 17:01:06.494179] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.132 [2024-07-25 17:01:06.496439] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.392 [2024-07-25 17:01:06.746153] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.392 [2024-07-25 17:01:06.768060] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.392 [2024-07-25 17:01:06.783825] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.392 [2024-07-25 17:01:06.796129] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.392 [2024-07-25 17:01:06.829628] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.392 [2024-07-25 17:01:06.857007] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.651 [2024-07-25 17:01:06.883477] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.651 [2024-07-25 17:01:06.900203] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.651 [2024-07-25 17:01:07.094617] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.910 [2024-07-25 17:01:07.160189] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.910 [2024-07-25 17:01:07.193774] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.910 [2024-07-25 17:01:07.299925] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:14.910 [2024-07-25 17:01:07.369839] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:15.169 [2024-07-25 17:01:07.392638] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:15.169 [2024-07-25 17:01:07.415588] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:15.169 [2024-07-25 17:01:07.546132] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 
0xb9 [identical 'scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9' notices, timestamped 2024-07-25 17:01:07.583729 through 17:01:16.674968, repeat for the duration of the fio run; the duplicate lines are elided here] 00:10:24.290 00:10:24.290 job0: (groupid=0, jobs=1): err=
0: pid=64721: Thu Jul 25 17:01:16 2024 00:10:24.290 read: IOPS=971, BW=121MiB/s (127MB/s)(1200MiB/9886msec) 00:10:24.290 slat (usec): min=5, max=2419, avg=23.47, stdev=53.54 00:10:24.290 clat (usec): min=180, max=203938, avg=3261.22, stdev=6053.53 00:10:24.290 lat (usec): min=226, max=204118, avg=3284.70, stdev=6054.34 00:10:24.290 clat percentiles (usec): 00:10:24.290 | 1.00th=[ 750], 5.00th=[ 1057], 10.00th=[ 1270], 20.00th=[ 1876], 00:10:24.290 | 30.00th=[ 2212], 40.00th=[ 2409], 50.00th=[ 2606], 60.00th=[ 2900], 00:10:24.290 | 70.00th=[ 3359], 80.00th=[ 4178], 90.00th=[ 5538], 95.00th=[ 6718], 00:10:24.290 | 99.00th=[ 8717], 99.50th=[ 10028], 99.90th=[ 22152], 99.95th=[204473], 00:10:24.290 | 99.99th=[204473] 00:10:24.290 write: IOPS=1625, BW=203MiB/s (213MB/s)(1206MiB/5937msec); 0 zone resets 00:10:24.290 slat (usec): min=25, max=11545, avg=93.26, stdev=274.06 00:10:24.290 clat (usec): min=431, max=19240, avg=4729.85, stdev=1988.02 00:10:24.290 lat (usec): min=512, max=19350, avg=4823.11, stdev=2000.44 00:10:24.290 clat percentiles (usec): 00:10:24.290 | 1.00th=[ 1811], 5.00th=[ 2474], 10.00th=[ 2900], 20.00th=[ 3458], 00:10:24.290 | 30.00th=[ 3720], 40.00th=[ 3884], 50.00th=[ 4080], 60.00th=[ 4555], 00:10:24.290 | 70.00th=[ 5145], 80.00th=[ 5800], 90.00th=[ 7242], 95.00th=[ 8717], 00:10:24.290 | 99.00th=[11994], 99.50th=[13042], 99.90th=[16319], 99.95th=[17695], 00:10:24.290 | 99.99th=[19268] 00:10:24.290 bw ( KiB/s): min=81920, max=151040, per=15.69%, avg=122756.74, stdev=13801.96, samples=19 00:10:24.290 iops : min= 640, max= 1180, avg=958.89, stdev=107.80, samples=19 00:10:24.290 lat (usec) : 250=0.03%, 500=0.12%, 750=0.36%, 1000=1.49% 00:10:24.290 lat (msec) : 2=10.41%, 4=50.03%, 10=35.99%, 20=1.51%, 50=0.02% 00:10:24.290 lat (msec) : 250=0.04% 00:10:24.290 cpu : usr=9.22%, sys=4.28%, ctx=14550, majf=0, minf=1 00:10:24.290 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.290 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.290 issued rwts: total=9600,9648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.290 latency : target=0, window=0, percentile=100.00%, depth=8 00:10:24.290 job1: (groupid=0, jobs=1): err= 0: pid=64724: Thu Jul 25 17:01:16 2024 00:10:24.290 read: IOPS=958, BW=120MiB/s (126MB/s)(1180MiB/9845msec) 00:10:24.290 slat (usec): min=5, max=7265, avg=24.40, stdev=107.14 00:10:24.290 clat (usec): min=208, max=206677, avg=3244.81, stdev=5768.80 00:10:24.290 lat (usec): min=229, max=206696, avg=3269.21, stdev=5768.28 00:10:24.290 clat percentiles (usec): 00:10:24.290 | 1.00th=[ 791], 5.00th=[ 1106], 10.00th=[ 1401], 20.00th=[ 1926], 00:10:24.290 | 30.00th=[ 2212], 40.00th=[ 2442], 50.00th=[ 2638], 60.00th=[ 2868], 00:10:24.290 | 70.00th=[ 3326], 80.00th=[ 4113], 90.00th=[ 5407], 95.00th=[ 6652], 00:10:24.290 | 99.00th=[ 9372], 99.50th=[ 10683], 99.90th=[ 16712], 99.95th=[204473], 00:10:24.290 | 99.99th=[206570] 00:10:24.290 write: IOPS=1588, BW=199MiB/s (208MB/s)(1182MiB/5954msec); 0 zone resets 00:10:24.290 slat (usec): min=25, max=19365, avg=92.41, stdev=315.03 00:10:24.290 clat (usec): min=667, max=21772, avg=4840.88, stdev=2031.10 00:10:24.290 lat (usec): min=780, max=23874, avg=4933.30, stdev=2047.70 00:10:24.290 clat percentiles (usec): 00:10:24.290 | 1.00th=[ 1860], 5.00th=[ 2573], 10.00th=[ 2999], 20.00th=[ 3523], 00:10:24.290 | 30.00th=[ 3752], 40.00th=[ 3916], 50.00th=[ 4178], 60.00th=[ 4686], 00:10:24.290 | 70.00th=[ 
5211], 80.00th=[ 6063], 90.00th=[ 7504], 95.00th=[ 8717], 00:10:24.290 | 99.00th=[12387], 99.50th=[13173], 99.90th=[16712], 99.95th=[20841], 00:10:24.290 | 99.99th=[21890] 00:10:24.290 bw ( KiB/s): min=65792, max=146906, per=15.51%, avg=121289.21, stdev=16709.84, samples=19 00:10:24.290 iops : min= 514, max= 1147, avg=947.42, stdev=130.51, samples=19 00:10:24.290 lat (usec) : 250=0.02%, 500=0.05%, 750=0.33%, 1000=1.25% 00:10:24.290 lat (msec) : 2=10.15%, 4=49.77%, 10=36.80%, 20=1.56%, 50=0.04% 00:10:24.290 lat (msec) : 250=0.04% 00:10:24.290 cpu : usr=8.70%, sys=4.11%, ctx=13997, majf=0, minf=1 00:10:24.290 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.290 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.290 issued rwts: total=9440,9456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.290 latency : target=0, window=0, percentile=100.00%, depth=8 00:10:24.290 job2: (groupid=0, jobs=1): err= 0: pid=64727: Thu Jul 25 17:01:16 2024 00:10:24.290 read: IOPS=872, BW=109MiB/s (114MB/s)(1080MiB/9906msec) 00:10:24.290 slat (usec): min=5, max=3309, avg=20.30, stdev=61.34 00:10:24.290 clat (usec): min=171, max=211437, avg=4012.96, stdev=12097.41 00:10:24.290 lat (usec): min=248, max=211457, avg=4033.26, stdev=12096.89 00:10:24.291 clat percentiles (usec): 00:10:24.291 | 1.00th=[ 775], 5.00th=[ 1123], 10.00th=[ 1319], 20.00th=[ 1958], 00:10:24.291 | 30.00th=[ 2343], 40.00th=[ 2606], 50.00th=[ 2835], 60.00th=[ 3130], 00:10:24.291 | 70.00th=[ 3621], 80.00th=[ 4555], 90.00th=[ 6063], 95.00th=[ 7504], 00:10:24.291 | 99.00th=[ 9634], 99.50th=[ 13960], 99.90th=[210764], 99.95th=[210764], 00:10:24.291 | 99.99th=[210764] 00:10:24.291 write: IOPS=1574, BW=197MiB/s (206MB/s)(1080MiB/5489msec); 0 zone resets 00:10:24.291 slat (usec): min=26, max=8482, avg=87.32, stdev=258.28 00:10:24.291 clat (usec): min=333, max=23812, avg=4895.05, stdev=1967.49 00:10:24.291 lat (usec): min=614, max=23894, avg=4982.37, stdev=1976.12 00:10:24.291 clat percentiles (usec): 00:10:24.291 | 1.00th=[ 1745], 5.00th=[ 2573], 10.00th=[ 2999], 20.00th=[ 3589], 00:10:24.291 | 30.00th=[ 3851], 40.00th=[ 4015], 50.00th=[ 4293], 60.00th=[ 4752], 00:10:24.291 | 70.00th=[ 5342], 80.00th=[ 6259], 90.00th=[ 7570], 95.00th=[ 8717], 00:10:24.291 | 99.00th=[11338], 99.50th=[12256], 99.90th=[15270], 99.95th=[17171], 00:10:24.291 | 99.99th=[23725] 00:10:24.291 bw ( KiB/s): min=62720, max=140007, per=14.11%, avg=110381.55, stdev=24925.94, samples=20 00:10:24.291 iops : min= 490, max= 1093, avg=862.15, stdev=194.75, samples=20 00:10:24.291 lat (usec) : 250=0.02%, 500=0.07%, 750=0.42%, 1000=1.03% 00:10:24.291 lat (msec) : 2=9.81%, 4=45.80%, 10=41.23%, 20=1.44%, 50=0.01% 00:10:24.291 lat (msec) : 250=0.17% 00:10:24.291 cpu : usr=7.49%, sys=3.27%, ctx=13154, majf=0, minf=1 00:10:24.291 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.291 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.291 issued rwts: total=8640,8640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.291 latency : target=0, window=0, percentile=100.00%, depth=8 00:10:24.291 job3: (groupid=0, jobs=1): err= 0: pid=64728: Thu Jul 25 17:01:16 2024 00:10:24.291 read: IOPS=876, BW=110MiB/s (115MB/s)(1080MiB/9857msec) 00:10:24.291 slat (usec): min=5, max=1644, avg=21.23, stdev=45.33 00:10:24.291 clat (usec): 
min=118, max=212831, avg=3948.74, stdev=11885.68 00:10:24.291 lat (usec): min=344, max=212839, avg=3969.98, stdev=11885.63 00:10:24.291 clat percentiles (usec): 00:10:24.291 | 1.00th=[ 807], 5.00th=[ 1106], 10.00th=[ 1319], 20.00th=[ 1942], 00:10:24.291 | 30.00th=[ 2311], 40.00th=[ 2573], 50.00th=[ 2868], 60.00th=[ 3228], 00:10:24.291 | 70.00th=[ 3687], 80.00th=[ 4424], 90.00th=[ 5866], 95.00th=[ 6980], 00:10:24.291 | 99.00th=[ 9503], 99.50th=[ 14877], 99.90th=[210764], 99.95th=[212861], 00:10:24.291 | 99.99th=[212861] 00:10:24.291 write: IOPS=1568, BW=196MiB/s (206MB/s)(1080MiB/5508msec); 0 zone resets 00:10:24.291 slat (usec): min=24, max=5996, avg=83.25, stdev=198.82 00:10:24.291 clat (usec): min=217, max=20438, avg=4923.92, stdev=2022.70 00:10:24.291 lat (usec): min=705, max=20471, avg=5007.17, stdev=2026.97 00:10:24.291 clat percentiles (usec): 00:10:24.291 | 1.00th=[ 1909], 5.00th=[ 2671], 10.00th=[ 2999], 20.00th=[ 3490], 00:10:24.291 | 30.00th=[ 3818], 40.00th=[ 4015], 50.00th=[ 4293], 60.00th=[ 4752], 00:10:24.291 | 70.00th=[ 5407], 80.00th=[ 6325], 90.00th=[ 7570], 95.00th=[ 8717], 00:10:24.291 | 99.00th=[11863], 99.50th=[13566], 99.90th=[17957], 99.95th=[17957], 00:10:24.291 | 99.99th=[20317] 00:10:24.291 bw ( KiB/s): min=58368, max=138687, per=14.11%, avg=110375.45, stdev=22592.33, samples=20 00:10:24.291 iops : min= 456, max= 1083, avg=862.10, stdev=176.44, samples=20 00:10:24.291 lat (usec) : 250=0.02%, 500=0.06%, 750=0.31%, 1000=1.08% 00:10:24.291 lat (msec) : 2=9.96%, 4=45.61%, 10=41.34%, 20=1.46%, 50=0.01% 00:10:24.291 lat (msec) : 250=0.16% 00:10:24.291 cpu : usr=7.35%, sys=3.74%, ctx=12958, majf=0, minf=1 00:10:24.291 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.291 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.291 issued rwts: total=8640,8640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.291 latency : target=0, window=0, percentile=100.00%, depth=8 00:10:24.291 00:10:24.291 Run status group 0 (all jobs): 00:10:24.291 READ: bw=458MiB/s (481MB/s), 109MiB/s-121MiB/s (114MB/s-127MB/s), io=4540MiB (4761MB), run=9845-9906msec 00:10:24.291 WRITE: bw=764MiB/s (801MB/s), 196MiB/s-203MiB/s (206MB/s-213MB/s), io=4548MiB (4769MB), run=5489-5954msec 00:10:24.291 00:10:24.291 Disk stats (read/write): 00:10:24.291 sdb: ios=11095/9543, merge=0/0, ticks=33542/42091, in_queue=75633, util=97.52% 00:10:24.291 sdc: ios=10934/9361, merge=0/0, ticks=30841/42203, in_queue=73044, util=97.22% 00:10:24.291 sda: ios=10186/8640, merge=0/0, ticks=34631/39364, in_queue=73996, util=97.68% 00:10:24.291 sdd: ios=10268/8640, merge=0/0, ticks=32721/39554, in_queue=72276, util=97.40% 00:10:24.291 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@78 -- # timing_exit fio 00:10:24.291 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:24.291 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:24.291 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@80 -- # rm -f ./local-job0-0-verify.state 00:10:24.291 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:10:24.291 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@83 -- # rm -f 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@84 -- # iscsicleanup 00:10:24.550 Cleaning up iSCSI connection 00:10:24.550 17:01:16 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:10:24.550 Logging out of session [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:10:24.550 Logging out of session [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:24.550 Logout of [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:10:24.550 Logout of [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@985 -- # rm -rf 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@85 -- # killprocess 64470 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@950 -- # '[' -z 64470 ']' 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # kill -0 64470 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@955 -- # uname 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64470 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:24.550 killing process with pid 64470 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64470' 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@969 -- # kill 64470 00:10:24.550 17:01:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@974 -- # wait 64470 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@86 -- # iscsitestfini 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:10:25.117 ************************************ 00:10:25.117 END TEST iscsi_tgt_iscsi_lvol 00:10:25.117 ************************************ 00:10:25.117 00:10:25.117 real 0m17.132s 00:10:25.117 user 1m5.040s 00:10:25.117 sys 0m7.872s 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:25.117 17:01:17 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@37 -- # run_test iscsi_tgt_fio /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:10:25.117 17:01:17 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:25.117 17:01:17 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.117 17:01:17 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:10:25.117 ************************************ 00:10:25.117 START TEST iscsi_tgt_fio 00:10:25.117 ************************************ 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:10:25.117 * Looking for test storage... 
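A note on the iscsicleanup trace above: the initiator side is torn down by logging out of every recorded session and then deleting the cached node records, so the next test starts against a clean discovery database. Standalone, those are the same two iscsiadm calls seen in the log:

    iscsiadm -m node --logout    # close every logged-in session
    iscsiadm -m node -o delete   # drop the node records left behind by discovery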
00:10:25.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:10:25.117 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@11 -- # iscsitestinit 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@48 -- # '[' -z 10.0.0.1 ']' 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@53 -- # '[' -z 10.0.0.2 ']' 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@58 -- # MALLOC_BDEV_SIZE=64 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@59 -- # MALLOC_BLOCK_SIZE=4096 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@60 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@61 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@63 -- # timing_enter start_iscsi_tgt 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@66 -- # pid=65918 00:10:25.118 Process pid: 65918 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@67 -- # echo 'Process pid: 65918' 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@69 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 
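The trap just installed is the harness's safety net: if any later step fails or the run is interrupted, the target process still gets reaped. The generic shape of the pattern, with killprocess standing in for whatever cleanup a given test needs:

    trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT
    # ... test body ...
    trap - SIGINT SIGTERM EXIT   # cleared on the success path, as the lvol test did above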
00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@71 -- # waitforlisten 65918 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@65 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@831 -- # '[' -z 65918 ']' 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.118 17:01:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:10:25.376 [2024-07-25 17:01:17.645003] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:25.376 [2024-07-25 17:01:17.645539] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65918 ] 00:10:25.376 [2024-07-25 17:01:17.787268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.635 [2024-07-25 17:01:17.886283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.202 17:01:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.202 17:01:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@864 -- # return 0 00:10:26.202 17:01:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:10:26.460 iscsi_tgt is listening. Running tests... 00:10:26.460 17:01:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@75 -- # echo 'iscsi_tgt is listening. Running tests...' 
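Note the difference from the lvol test, which configured its target with individual RPCs: here fio.sh starts the target with --wait-for-rpc and replays a saved JSON configuration in a single load_config call. A hedged sketch of that save/replay pair; the file name is invented for illustration:

    ./scripts/rpc.py save_config > iscsi_config.json   # capture a configured target's state
    ./scripts/rpc.py load_config < iscsi_config.json   # replay it into a fresh --wait-for-rpc target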
00:10:26.460 17:01:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@77 -- # timing_exit start_iscsi_tgt 00:10:26.460 17:01:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:26.460 17:01:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:10:26.718 17:01:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:10:26.718 17:01:19 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:10:26.977 17:01:19 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:10:27.236 17:01:19 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # malloc_bdevs='Malloc0 ' 00:10:27.236 17:01:19 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:10:27.494 17:01:19 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # malloc_bdevs+=Malloc1 00:10:27.494 17:01:19 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:27.494 17:01:19 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 1024 512 00:10:28.062 17:01:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # bdev=Malloc2 00:10:28.062 17:01:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias 'raid0:0 Malloc2:1' 1:2 64 -d 00:10:28.321 17:01:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@91 -- # sleep 1 00:10:29.269 17:01:21 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@93 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:29.269 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:10:29.269 17:01:21 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@94 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:10:29.269 [2024-07-25 17:01:21.708671] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:29.269 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:10:29.269 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
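Login succeeded; the waitforiscsidevices helper traced next just polls the session table until the expected number of SCSI disks has attached. A standalone equivalent of one probe, using the same grep the helper runs:

    # count disks attached across all open iSCSI sessions
    n=$(iscsiadm -m session -P 3 | grep -c 'Attached scsi disk sd[a-z]*')
    [ "$n" -eq 2 ] && echo 'both expected devices attached'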
00:10:29.269 17:01:21 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@95 -- # waitforiscsidevices 2 00:10:29.269 17:01:21 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@116 -- # local num=2 00:10:29.269 17:01:21 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:29.269 17:01:21 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:29.269 17:01:21 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:29.269 [2024-07-25 17:01:21.722454] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:29.269 17:01:21 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:29.269 17:01:21 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # n=2 00:10:29.269 17:01:21 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@120 -- # '[' 2 -ne 2 ']' 00:10:29.269 17:01:21 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@123 -- # return 0 00:10:29.269 17:01:21 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@97 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; delete_tmp_files; exit 1' SIGINT SIGTERM EXIT 00:10:29.269 17:01:21 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:10:29.528 [global] 00:10:29.528 thread=1 00:10:29.528 invalidate=1 00:10:29.528 rw=randrw 00:10:29.528 time_based=1 00:10:29.528 runtime=1 00:10:29.528 ioengine=libaio 00:10:29.528 direct=1 00:10:29.528 bs=4096 00:10:29.528 iodepth=1 00:10:29.528 norandommap=0 00:10:29.528 numjobs=1 00:10:29.528 00:10:29.528 verify_dump=1 00:10:29.528 verify_backlog=512 00:10:29.528 verify_state_save=0 00:10:29.528 do_verify=1 00:10:29.528 verify=crc32c-intel 00:10:29.528 [job0] 00:10:29.528 filename=/dev/sda 00:10:29.528 [job1] 00:10:29.528 filename=/dev/sdb 00:10:29.528 queue_depth set to 113 (sda) 00:10:29.528 queue_depth set to 113 (sdb) 00:10:29.528 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.528 job1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.528 fio-3.35 00:10:29.528 Starting 2 threads 00:10:29.528 [2024-07-25 17:01:21.987204] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:29.528 [2024-07-25 17:01:21.991546] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:30.908 [2024-07-25 17:01:23.101389] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:30.908 [2024-07-25 17:01:23.105554] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:30.908 00:10:30.908 job0: (groupid=0, jobs=1): err= 0: pid=66051: Thu Jul 25 17:01:23 2024 00:10:30.908 read: IOPS=7181, BW=28.1MiB/s (29.4MB/s)(28.1MiB/1000msec) 00:10:30.908 slat (usec): min=2, max=267, avg= 5.65, stdev= 4.34 00:10:30.908 clat (usec): min=2, max=2788, avg=85.09, stdev=39.68 00:10:30.908 lat (usec): min=57, max=2795, avg=90.75, stdev=39.97 00:10:30.908 clat percentiles (usec): 00:10:30.908 | 1.00th=[ 62], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 79], 00:10:30.908 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 85], 00:10:30.908 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 95], 95.00th=[ 99], 00:10:30.908 | 99.00th=[ 116], 99.50th=[ 143], 99.90th=[ 490], 99.95th=[ 660], 00:10:30.908 | 99.99th=[ 2802] 00:10:30.908 bw ( KiB/s): min=14896, max=14896, per=25.90%, avg=14896.00, stdev= 0.00, samples=1 00:10:30.908 
iops : min= 3724, max= 3724, avg=3724.00, stdev= 0.00, samples=1 00:10:30.908 write: IOPS=3708, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1000msec); 0 zone resets 00:10:30.908 slat (nsec): min=3638, max=47838, avg=6679.51, stdev=2184.61 00:10:30.908 clat (usec): min=50, max=516, avg=84.94, stdev=12.98 00:10:30.908 lat (usec): min=56, max=532, avg=91.62, stdev=13.56 00:10:30.908 clat percentiles (usec): 00:10:30.908 | 1.00th=[ 54], 5.00th=[ 74], 10.00th=[ 78], 20.00th=[ 80], 00:10:30.908 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 85], 00:10:30.908 | 70.00th=[ 87], 80.00th=[ 91], 90.00th=[ 96], 95.00th=[ 101], 00:10:30.908 | 99.00th=[ 118], 99.50th=[ 125], 99.90th=[ 194], 99.95th=[ 223], 00:10:30.908 | 99.99th=[ 519] 00:10:30.908 bw ( KiB/s): min=15568, max=15568, per=52.16%, avg=15568.00, stdev= 0.00, samples=1 00:10:30.908 iops : min= 3892, max= 3892, avg=3892.00, stdev= 0.00, samples=1 00:10:30.908 lat (usec) : 4=0.01%, 10=0.01%, 100=95.06%, 250=4.72%, 500=0.14% 00:10:30.908 lat (usec) : 750=0.05%, 1000=0.01% 00:10:30.908 lat (msec) : 4=0.01% 00:10:30.908 cpu : usr=3.30%, sys=9.20%, ctx=10887, majf=0, minf=7 00:10:30.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.908 issued rwts: total=7181,3708,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.908 job1: (groupid=0, jobs=1): err= 0: pid=66054: Thu Jul 25 17:01:23 2024 00:10:30.908 read: IOPS=7206, BW=28.2MiB/s (29.5MB/s)(28.2MiB/1001msec) 00:10:30.908 slat (nsec): min=2707, max=62092, avg=5405.81, stdev=2492.76 00:10:30.908 clat (usec): min=41, max=300, avg=82.39, stdev=11.52 00:10:30.908 lat (usec): min=46, max=362, avg=87.80, stdev=11.97 00:10:30.908 clat percentiles (usec): 00:10:30.908 | 1.00th=[ 46], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 76], 00:10:30.908 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 84], 60.00th=[ 85], 00:10:30.908 | 70.00th=[ 86], 80.00th=[ 89], 90.00th=[ 94], 95.00th=[ 98], 00:10:30.908 | 99.00th=[ 117], 99.50th=[ 125], 99.90th=[ 149], 99.95th=[ 167], 00:10:30.908 | 99.99th=[ 302] 00:10:30.908 bw ( KiB/s): min=15424, max=15424, per=26.81%, avg=15424.00, stdev= 0.00, samples=1 00:10:30.908 iops : min= 3856, max= 3856, avg=3856.00, stdev= 0.00, samples=1 00:10:30.908 write: IOPS=3757, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1001msec); 0 zone resets 00:10:30.908 slat (nsec): min=3472, max=40769, avg=6375.02, stdev=2122.74 00:10:30.908 clat (usec): min=48, max=160, avg=88.69, stdev=11.83 00:10:30.908 lat (usec): min=54, max=164, avg=95.07, stdev=12.17 00:10:30.908 clat percentiles (usec): 00:10:30.908 | 1.00th=[ 53], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 82], 00:10:30.908 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 89], 60.00th=[ 90], 00:10:30.908 | 70.00th=[ 92], 80.00th=[ 94], 90.00th=[ 101], 95.00th=[ 111], 00:10:30.908 | 99.00th=[ 135], 99.50th=[ 139], 99.90th=[ 151], 99.95th=[ 157], 00:10:30.908 | 99.99th=[ 161] 00:10:30.908 bw ( KiB/s): min=16080, max=16080, per=53.88%, avg=16080.00, stdev= 0.00, samples=1 00:10:30.908 iops : min= 4020, max= 4020, avg=4020.00, stdev= 0.00, samples=1 00:10:30.908 lat (usec) : 50=1.42%, 100=92.30%, 250=6.26%, 500=0.02% 00:10:30.908 cpu : usr=3.60%, sys=9.00%, ctx=10975, majf=0, minf=7 00:10:30.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.908 issued rwts: total=7214,3761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.908 00:10:30.908 Run status group 0 (all jobs): 00:10:30.908 READ: bw=56.2MiB/s (58.9MB/s), 28.1MiB/s-28.2MiB/s (29.4MB/s-29.5MB/s), io=56.2MiB (59.0MB), run=1000-1001msec 00:10:30.908 WRITE: bw=29.1MiB/s (30.6MB/s), 14.5MiB/s-14.7MiB/s (15.2MB/s-15.4MB/s), io=29.2MiB (30.6MB), run=1000-1001msec 00:10:30.908 00:10:30.908 Disk stats (read/write): 00:10:30.908 sda: ios=6338/3331, merge=0/0, ticks=527/277, in_queue=805, util=90.19% 00:10:30.908 sdb: ios=6365/3409, merge=0/0, ticks=513/295, in_queue=809, util=90.65% 00:10:30.908 17:01:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:10:30.908 [global] 00:10:30.908 thread=1 00:10:30.908 invalidate=1 00:10:30.908 rw=randrw 00:10:30.908 time_based=1 00:10:30.908 runtime=1 00:10:30.908 ioengine=libaio 00:10:30.908 direct=1 00:10:30.908 bs=131072 00:10:30.908 iodepth=32 00:10:30.908 norandommap=0 00:10:30.908 numjobs=1 00:10:30.908 00:10:30.908 verify_dump=1 00:10:30.908 verify_backlog=512 00:10:30.908 verify_state_save=0 00:10:30.908 do_verify=1 00:10:30.908 verify=crc32c-intel 00:10:30.908 [job0] 00:10:30.908 filename=/dev/sda 00:10:30.908 [job1] 00:10:30.908 filename=/dev/sdb 00:10:30.908 queue_depth set to 113 (sda) 00:10:30.908 queue_depth set to 113 (sdb) 00:10:30.908 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:10:30.908 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:10:30.908 fio-3.35 00:10:30.908 Starting 2 threads 00:10:30.908 [2024-07-25 17:01:23.370144] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:30.908 [2024-07-25 17:01:23.374220] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:31.842 [2024-07-25 17:01:24.270348] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:32.101 [2024-07-25 17:01:24.499078] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:32.101 [2024-07-25 17:01:24.502906] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:32.101 00:10:32.101 job0: (groupid=0, jobs=1): err= 0: pid=66122: Thu Jul 25 17:01:24 2024 00:10:32.101 read: IOPS=2190, BW=274MiB/s (287MB/s)(277MiB/1010msec) 00:10:32.101 slat (usec): min=9, max=740, avg=26.46, stdev=17.75 00:10:32.101 clat (usec): min=781, max=30950, avg=5406.11, stdev=4510.95 00:10:32.101 lat (usec): min=810, max=30974, avg=5432.57, stdev=4510.44 00:10:32.101 clat percentiles (usec): 00:10:32.101 | 1.00th=[ 922], 5.00th=[ 1074], 10.00th=[ 1205], 20.00th=[ 1565], 00:10:32.101 | 30.00th=[ 2704], 40.00th=[ 4555], 50.00th=[ 4948], 60.00th=[ 5145], 00:10:32.101 | 70.00th=[ 5538], 80.00th=[ 7504], 90.00th=[10290], 95.00th=[13566], 00:10:32.101 | 99.00th=[25560], 99.50th=[27919], 99.90th=[30540], 99.95th=[30802], 00:10:32.101 | 99.99th=[31065] 00:10:32.101 bw ( KiB/s): min=105516, max=174848, per=31.87%, avg=140182.00, stdev=49025.13, samples=2 00:10:32.101 iops : min= 824, max= 1366, avg=1095.00, stdev=383.25, samples=2 00:10:32.101 write: IOPS=1359, BW=170MiB/s (178MB/s)(143MiB/842msec); 0 zone resets 
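Before the first fio run above, the trace shows common.sh counting sessions until both LUNs surface (local num=2, a loop bounded at 20 iterations, and grep -c 'Attached scsi disk sd[a-z]*' against iscsiadm session output). A sketch of that retry loop reconstructed from those fragments; the loop bound, the grep, and the success test are from the trace, while the one-second sleep is an assumption:

    # Reconstructed waitforiscsidevices: poll until $1 SCSI disks are attached.
    waitforiscsidevices() {
        local num=$1 n i
        for ((i = 1; i <= 20; i++)); do          # retry bound from common.sh@118
            n=$(iscsiadm -m session -P 3 2>/dev/null |
                grep -c 'Attached scsi disk sd[a-z]*')
            [ "$n" -eq "$num" ] && return 0      # common.sh@120/123: count matches
            sleep 1                              # assumed pacing between polls
        done
        return 1
    }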
00:10:32.101 slat (usec): min=44, max=537, avg=103.45, stdev=20.27 00:10:32.101 clat (usec): min=3443, max=31979, avg=17399.81, stdev=5510.20 00:10:32.101 lat (usec): min=3547, max=32074, avg=17503.27, stdev=5510.57 00:10:32.101 clat percentiles (usec): 00:10:32.101 | 1.00th=[ 8586], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[13698], 00:10:32.101 | 30.00th=[14746], 40.00th=[15401], 50.00th=[16188], 60.00th=[16909], 00:10:32.101 | 70.00th=[19006], 80.00th=[21627], 90.00th=[26346], 95.00th=[28705], 00:10:32.101 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31589], 99.95th=[31851], 00:10:32.101 | 99.99th=[31851] 00:10:32.101 bw ( KiB/s): min=103984, max=181504, per=45.55%, avg=142744.00, stdev=54814.92, samples=2 00:10:32.101 iops : min= 812, max= 1418, avg=1115.00, stdev=428.51, samples=2 00:10:32.101 lat (usec) : 1000=1.73% 00:10:32.101 lat (msec) : 2=15.37%, 4=6.37%, 10=37.56%, 20=28.75%, 50=10.22% 00:10:32.101 cpu : usr=17.33%, sys=7.92%, ctx=2137, majf=0, minf=19 00:10:32.101 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=97.2%, >=64=0.0% 00:10:32.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.101 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:10:32.101 issued rwts: total=2212,1145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.101 latency : target=0, window=0, percentile=100.00%, depth=32 00:10:32.101 job1: (groupid=0, jobs=1): err= 0: pid=66123: Thu Jul 25 17:01:24 2024 00:10:32.101 read: IOPS=1246, BW=156MiB/s (163MB/s)(157MiB/1010msec) 00:10:32.101 slat (usec): min=5, max=165, avg=23.79, stdev=11.61 00:10:32.101 clat (usec): min=759, max=34366, avg=5762.09, stdev=5988.50 00:10:32.101 lat (usec): min=788, max=34409, avg=5785.88, stdev=5986.41 00:10:32.101 clat percentiles (usec): 00:10:32.101 | 1.00th=[ 873], 5.00th=[ 996], 10.00th=[ 1057], 20.00th=[ 1188], 00:10:32.101 | 30.00th=[ 1385], 40.00th=[ 1778], 50.00th=[ 3589], 60.00th=[ 5604], 00:10:32.101 | 70.00th=[ 7177], 80.00th=[ 9765], 90.00th=[12911], 95.00th=[16057], 00:10:32.101 | 99.00th=[30540], 99.50th=[31851], 99.90th=[34341], 99.95th=[34341], 00:10:32.101 | 99.99th=[34341] 00:10:32.101 bw ( KiB/s): min=154570, max=166912, per=36.54%, avg=160741.00, stdev=8727.11, samples=2 00:10:32.101 iops : min= 1207, max= 1304, avg=1255.50, stdev=68.59, samples=2 00:10:32.101 write: IOPS=1314, BW=164MiB/s (172MB/s)(166MiB/1010msec); 0 zone resets 00:10:32.101 slat (usec): min=25, max=364, avg=93.27, stdev=31.78 00:10:32.101 clat (usec): min=7964, max=37652, avg=18717.60, stdev=6314.99 00:10:32.101 lat (usec): min=7997, max=37772, avg=18810.87, stdev=6324.02 00:10:32.101 clat percentiles (usec): 00:10:32.101 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[11600], 20.00th=[14353], 00:10:32.101 | 30.00th=[15139], 40.00th=[15926], 50.00th=[16712], 60.00th=[17957], 00:10:32.101 | 70.00th=[20055], 80.00th=[25560], 90.00th=[28705], 95.00th=[31065], 00:10:32.101 | 99.00th=[34341], 99.50th=[35390], 99.90th=[36963], 99.95th=[37487], 00:10:32.101 | 99.99th=[37487] 00:10:32.101 bw ( KiB/s): min=154826, max=177408, per=53.00%, avg=166117.00, stdev=15967.89, samples=2 00:10:32.101 iops : min= 1209, max= 1386, avg=1297.50, stdev=125.16, samples=2 00:10:32.101 lat (usec) : 1000=2.71% 00:10:32.101 lat (msec) : 2=17.78%, 4=4.56%, 10=17.39%, 20=40.43%, 50=17.12% 00:10:32.101 cpu : usr=10.51%, sys=6.84%, ctx=1988, majf=0, minf=19 00:10:32.101 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=98.8%, >=64=0.0% 00:10:32.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:10:32.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:10:32.101 issued rwts: total=1259,1328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.101 latency : target=0, window=0, percentile=100.00%, depth=32 00:10:32.101 00:10:32.101 Run status group 0 (all jobs): 00:10:32.101 READ: bw=430MiB/s (450MB/s), 156MiB/s-274MiB/s (163MB/s-287MB/s), io=434MiB (455MB), run=1010-1010msec 00:10:32.101 WRITE: bw=306MiB/s (321MB/s), 164MiB/s-170MiB/s (172MB/s-178MB/s), io=309MiB (324MB), run=842-1010msec 00:10:32.101 00:10:32.101 Disk stats (read/write): 00:10:32.101 sda: ios=1772/1024, merge=0/0, ticks=9822/17631, in_queue=27453, util=89.80% 00:10:32.101 sdb: ios=1121/1134, merge=0/0, ticks=6542/20430, in_queue=26973, util=89.97% 00:10:32.101 17:01:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@101 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 524288 -d 128 -t randrw -r 1 -v 00:10:32.360 [global] 00:10:32.360 thread=1 00:10:32.360 invalidate=1 00:10:32.360 rw=randrw 00:10:32.360 time_based=1 00:10:32.360 runtime=1 00:10:32.360 ioengine=libaio 00:10:32.360 direct=1 00:10:32.360 bs=524288 00:10:32.360 iodepth=128 00:10:32.360 norandommap=0 00:10:32.360 numjobs=1 00:10:32.360 00:10:32.360 verify_dump=1 00:10:32.360 verify_backlog=512 00:10:32.360 verify_state_save=0 00:10:32.360 do_verify=1 00:10:32.360 verify=crc32c-intel 00:10:32.360 [job0] 00:10:32.360 filename=/dev/sda 00:10:32.360 [job1] 00:10:32.360 filename=/dev/sdb 00:10:32.360 queue_depth set to 113 (sda) 00:10:32.360 queue_depth set to 113 (sdb) 00:10:32.360 job0: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:10:32.360 job1: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:10:32.360 fio-3.35 00:10:32.360 Starting 2 threads 00:10:32.360 [2024-07-25 17:01:24.765160] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:32.360 [2024-07-25 17:01:24.769189] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:33.737 [2024-07-25 17:01:25.837890] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:33.996 [2024-07-25 17:01:26.218698] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:33.996 00:10:33.996 job0: (groupid=0, jobs=1): err= 0: pid=66199: Thu Jul 25 17:01:26 2024 00:10:33.996 read: IOPS=298, BW=149MiB/s (156MB/s)(196MiB/1312msec) 00:10:33.996 slat (usec): min=14, max=38558, avg=1355.65, stdev=4047.65 00:10:33.996 clat (msec): min=70, max=504, avg=253.23, stdev=121.42 00:10:33.996 lat (msec): min=70, max=504, avg=254.59, stdev=121.67 00:10:33.996 clat percentiles (msec): 00:10:33.996 | 1.00th=[ 71], 5.00th=[ 123], 10.00th=[ 134], 20.00th=[ 163], 00:10:33.996 | 30.00th=[ 171], 40.00th=[ 186], 50.00th=[ 203], 60.00th=[ 218], 00:10:33.996 | 70.00th=[ 300], 80.00th=[ 372], 90.00th=[ 460], 95.00th=[ 506], 00:10:33.996 | 99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 506], 99.95th=[ 506], 00:10:33.996 | 99.99th=[ 506] 00:10:33.996 bw ( KiB/s): min=90112, max=157696, per=43.43%, avg=123904.00, stdev=47789.10, samples=2 00:10:33.996 iops : min= 176, max= 308, avg=242.00, stdev=93.34, samples=2 00:10:33.996 write: IOPS=346, BW=173MiB/s (182MB/s)(135MiB/779msec); 0 zone resets 00:10:33.996 slat (usec): min=122, max=18513, avg=1306.99, stdev=2575.71 00:10:33.996 clat (msec): min=78, max=340, avg=198.68, stdev=52.35 00:10:33.996 lat (msec): min=78, 
max=341, avg=199.99, stdev=52.67 00:10:33.996 clat percentiles (msec): 00:10:33.996 | 1.00th=[ 99], 5.00th=[ 122], 10.00th=[ 136], 20.00th=[ 165], 00:10:33.996 | 30.00th=[ 171], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 203], 00:10:33.996 | 70.00th=[ 218], 80.00th=[ 230], 90.00th=[ 271], 95.00th=[ 313], 00:10:33.996 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 342], 00:10:33.996 | 99.99th=[ 342] 00:10:33.996 bw ( KiB/s): min=129024, max=147456, per=44.23%, avg=138240.00, stdev=13033.39, samples=2 00:10:33.996 iops : min= 252, max= 288, avg=270.00, stdev=25.46, samples=2 00:10:33.996 lat (msec) : 100=2.87%, 250=70.35%, 500=21.63%, 750=5.14% 00:10:33.996 cpu : usr=6.48%, sys=2.75%, ctx=417, majf=0, minf=5 00:10:33.996 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.7%, >=64=80.9% 00:10:33.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.996 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:10:33.996 issued rwts: total=391,270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.996 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.996 job1: (groupid=0, jobs=1): err= 0: pid=66200: Thu Jul 25 17:01:26 2024 00:10:33.996 read: IOPS=318, BW=159MiB/s (167MB/s)(170MiB/1068msec) 00:10:33.996 slat (usec): min=16, max=18955, avg=1417.36, stdev=3076.75 00:10:33.996 clat (msec): min=65, max=336, avg=167.75, stdev=62.49 00:10:33.996 lat (msec): min=67, max=336, avg=169.17, stdev=62.88 00:10:33.996 clat percentiles (msec): 00:10:33.996 | 1.00th=[ 68], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 117], 00:10:33.996 | 30.00th=[ 124], 40.00th=[ 144], 50.00th=[ 159], 60.00th=[ 169], 00:10:33.996 | 70.00th=[ 176], 80.00th=[ 197], 90.00th=[ 275], 95.00th=[ 317], 00:10:33.996 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 338], 99.95th=[ 338], 00:10:33.996 | 99.99th=[ 338] 00:10:33.996 bw ( KiB/s): min=90112, max=209920, per=52.59%, avg=150016.00, stdev=84717.05, samples=2 00:10:33.996 iops : min= 176, max= 410, avg=293.00, stdev=165.46, samples=2 00:10:33.996 write: IOPS=357, BW=179MiB/s (188MB/s)(191MiB/1068msec); 0 zone resets 00:10:33.996 slat (usec): min=106, max=15201, avg=1349.10, stdev=2434.14 00:10:33.996 clat (msec): min=67, max=354, avg=185.50, stdev=60.08 00:10:33.996 lat (msec): min=68, max=354, avg=186.85, stdev=60.11 00:10:33.996 clat percentiles (msec): 00:10:33.996 | 1.00th=[ 69], 5.00th=[ 113], 10.00th=[ 121], 20.00th=[ 140], 00:10:33.996 | 30.00th=[ 155], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 186], 00:10:33.996 | 70.00th=[ 192], 80.00th=[ 205], 90.00th=[ 292], 95.00th=[ 317], 00:10:33.996 | 99.00th=[ 342], 99.50th=[ 355], 99.90th=[ 355], 99.95th=[ 355], 00:10:33.996 | 99.99th=[ 355] 00:10:33.996 bw ( KiB/s): min=89088, max=219136, per=49.30%, avg=154112.00, stdev=91957.82, samples=2 00:10:33.996 iops : min= 174, max= 428, avg=301.00, stdev=179.61, samples=2 00:10:33.996 lat (msec) : 100=4.29%, 250=80.75%, 500=14.96% 00:10:33.996 cpu : usr=9.93%, sys=3.09%, ctx=424, majf=0, minf=7 00:10:33.996 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:10:33.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.996 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:10:33.996 issued rwts: total=340,382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.996 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.996 00:10:33.997 Run status group 0 (all jobs): 00:10:33.997 READ: bw=279MiB/s (292MB/s), 149MiB/s-159MiB/s 
(156MB/s-167MB/s), io=366MiB (383MB), run=1068-1312msec 00:10:33.997 WRITE: bw=305MiB/s (320MB/s), 173MiB/s-179MiB/s (182MB/s-188MB/s), io=326MiB (342MB), run=779-1068msec 00:10:33.997 00:10:33.997 Disk stats (read/write): 00:10:33.997 sda: ios=432/270, merge=0/0, ticks=30132/22799, in_queue=52931, util=84.75% 00:10:33.997 sdb: ios=285/250, merge=0/0, ticks=18839/24777, in_queue=43617, util=77.00% 00:10:33.997 17:01:26 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 1024 -t read -r 1 -n 4 00:10:33.997 [global] 00:10:33.997 thread=1 00:10:33.997 invalidate=1 00:10:33.997 rw=read 00:10:33.997 time_based=1 00:10:33.997 runtime=1 00:10:33.997 ioengine=libaio 00:10:33.997 direct=1 00:10:33.997 bs=1048576 00:10:33.997 iodepth=1024 00:10:33.997 norandommap=1 00:10:33.997 numjobs=4 00:10:33.997 00:10:33.997 [job0] 00:10:33.997 filename=/dev/sda 00:10:33.997 [job1] 00:10:33.997 filename=/dev/sdb 00:10:33.997 queue_depth set to 113 (sda) 00:10:33.997 queue_depth set to 113 (sdb) 00:10:34.255 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:10:34.255 ... 00:10:34.255 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:10:34.255 ... 00:10:34.255 fio-3.35 00:10:34.255 Starting 8 threads 00:10:49.134 00:10:49.134 job0: (groupid=0, jobs=1): err= 0: pid=66267: Thu Jul 25 17:01:41 2024 00:10:49.134 read: IOPS=2, BW=2686KiB/s (2750kB/s)(38.0MiB/14487msec) 00:10:49.134 slat (usec): min=514, max=622933, avg=39248.60, stdev=143401.93 00:10:49.134 clat (msec): min=12995, max=14485, avg=14403.05, stdev=274.49 00:10:49.134 lat (msec): min=13618, max=14486, avg=14442.29, stdev=142.74 00:10:49.134 clat percentiles (msec): 00:10:49.134 | 1.00th=[12953], 5.00th=[13624], 10.00th=[14429], 20.00th=[14429], 00:10:49.134 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:10:49.134 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:10:49.134 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:10:49.134 | 99.99th=[14429] 00:10:49.134 lat (msec) : >=2000=100.00% 00:10:49.134 cpu : usr=0.00%, sys=0.17%, ctx=32, majf=0, minf=9729 00:10:49.134 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:10:49.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.134 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:10:49.134 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.134 latency : target=0, window=0, percentile=100.00%, depth=1024 00:10:49.134 job0: (groupid=0, jobs=1): err= 0: pid=66268: Thu Jul 25 17:01:41 2024 00:10:49.134 read: IOPS=1, BW=1203KiB/s (1232kB/s)(17.0MiB/14466msec) 00:10:49.134 slat (usec): min=428, max=832633, avg=86898.81, stdev=243996.89 00:10:49.134 clat (msec): min=12988, max=14461, avg=14266.97, stdev=431.30 00:10:49.134 lat (msec): min=13610, max=14465, avg=14353.87, stdev=279.72 00:10:49.134 clat percentiles (msec): 00:10:49.134 | 1.00th=[12953], 5.00th=[12953], 10.00th=[13624], 20.00th=[14429], 00:10:49.134 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:10:49.134 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:10:49.134 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:10:49.134 | 99.99th=[14429] 00:10:49.134 lat (msec) : >=2000=100.00% 00:10:49.134 cpu : 
usr=0.00%, sys=0.12%, ctx=30, majf=0, minf=4353 00:10:49.134 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:10:49.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.134 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:10:49.134 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.134 latency : target=0, window=0, percentile=100.00%, depth=1024 00:10:49.134 job0: (groupid=0, jobs=1): err= 0: pid=66269: Thu Jul 25 17:01:41 2024 00:10:49.134 read: IOPS=0, BW=709KiB/s (726kB/s)(10.0MiB/14453msec) 00:10:49.134 slat (usec): min=609, max=623597, avg=146191.40, stdev=259581.16 00:10:49.134 clat (msec): min=12990, max=14452, avg=14198.77, stdev=499.06 00:10:49.134 lat (msec): min=13614, max=14452, avg=14344.96, stdev=265.29 00:10:49.134 clat percentiles (msec): 00:10:49.134 | 1.00th=[12953], 5.00th=[12953], 10.00th=[12953], 20.00th=[13624], 00:10:49.134 | 30.00th=[14295], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:10:49.134 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:10:49.134 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:10:49.134 | 99.99th=[14429] 00:10:49.134 lat (msec) : >=2000=100.00% 00:10:49.134 cpu : usr=0.00%, sys=0.04%, ctx=22, majf=0, minf=2561 00:10:49.134 IO depths : 1=10.0%, 2=20.0%, 4=40.0%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.134 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.134 issued rwts: total=10,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.135 latency : target=0, window=0, percentile=100.00%, depth=1024 00:10:49.135 job0: (groupid=0, jobs=1): err= 0: pid=66270: Thu Jul 25 17:01:41 2024 00:10:49.135 read: IOPS=1, BW=1274KiB/s (1305kB/s)(18.0MiB/14463msec) 00:10:49.135 slat (usec): min=806, max=3084.5k, avg=218938.73, stdev=730948.27 00:10:49.135 clat (msec): min=10521, max=14459, avg=14173.41, stdev=933.79 00:10:49.135 lat (msec): min=13605, max=14461, avg=14392.35, stdev=203.47 00:10:49.135 clat percentiles (msec): 00:10:49.135 | 1.00th=[10537], 5.00th=[10537], 10.00th=[13624], 20.00th=[14429], 00:10:49.135 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:10:49.135 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:10:49.135 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:10:49.135 | 99.99th=[14429] 00:10:49.135 lat (msec) : >=2000=100.00% 00:10:49.135 cpu : usr=0.00%, sys=0.11%, ctx=25, majf=0, minf=4609 00:10:49.135 IO depths : 1=5.6%, 2=11.1%, 4=22.2%, 8=44.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:10:49.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.135 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:10:49.135 issued rwts: total=18,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.135 latency : target=0, window=0, percentile=100.00%, depth=1024 00:10:49.135 job1: (groupid=0, jobs=1): err= 0: pid=66271: Thu Jul 25 17:01:41 2024 00:10:49.135 read: IOPS=1, BW=1908KiB/s (1954kB/s)(27.0MiB/14491msec) 00:10:49.135 slat (usec): min=714, max=1244.9k, avg=55043.29, stdev=241066.18 00:10:49.135 clat (msec): min=13004, max=14488, avg=14410.54, stdev=284.48 00:10:49.135 lat (msec): min=14249, max=14490, avg=14465.58, stdev=44.60 00:10:49.135 clat percentiles (msec): 00:10:49.135 | 1.00th=[12953], 5.00th=[14295], 10.00th=[14429], 20.00th=[14429], 00:10:49.135 | 
30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:10:49.135 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:10:49.135 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:10:49.135 | 99.99th=[14429] 00:10:49.135 lat (msec) : >=2000=100.00% 00:10:49.135 cpu : usr=0.00%, sys=0.16%, ctx=34, majf=0, minf=6913 00:10:49.135 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:10:49.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.135 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:10:49.135 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.135 latency : target=0, window=0, percentile=100.00%, depth=1024 00:10:49.135 job1: (groupid=0, jobs=1): err= 0: pid=66272: Thu Jul 25 17:01:41 2024 00:10:49.135 read: IOPS=1, BW=1979KiB/s (2026kB/s)(28.0MiB/14490msec) 00:10:49.135 slat (usec): min=515, max=1244.5k, avg=53012.75, stdev=236723.71 00:10:49.135 clat (msec): min=13005, max=14488, avg=14405.29, stdev=280.73 00:10:49.135 lat (msec): min=14249, max=14489, avg=14458.30, stdev=59.73 00:10:49.135 clat percentiles (msec): 00:10:49.135 | 1.00th=[12953], 5.00th=[14295], 10.00th=[14295], 20.00th=[14429], 00:10:49.135 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:10:49.135 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:10:49.135 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:10:49.135 | 99.99th=[14429] 00:10:49.135 lat (msec) : >=2000=100.00% 00:10:49.135 cpu : usr=0.00%, sys=0.17%, ctx=36, majf=0, minf=7169 00:10:49.135 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:10:49.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.135 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:10:49.135 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.135 latency : target=0, window=0, percentile=100.00%, depth=1024 00:10:49.135 job1: (groupid=0, jobs=1): err= 0: pid=66273: Thu Jul 25 17:01:41 2024 00:10:49.135 read: IOPS=0, BW=355KiB/s (363kB/s)(5120KiB/14441msec) 00:10:49.135 slat (usec): min=1073, max=623039, avg=290917.00, stdev=314174.09 00:10:49.135 clat (msec): min=12986, max=14439, avg=13940.89, stdev=632.89 00:10:49.135 lat (msec): min=13609, max=14440, avg=14231.81, stdev=359.60 00:10:49.135 clat percentiles (msec): 00:10:49.135 | 1.00th=[12953], 5.00th=[12953], 10.00th=[12953], 20.00th=[12953], 00:10:49.135 | 30.00th=[13624], 40.00th=[13624], 50.00th=[14295], 60.00th=[14295], 00:10:49.135 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:10:49.135 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:10:49.135 | 99.99th=[14429] 00:10:49.135 lat (msec) : >=2000=100.00% 00:10:49.135 cpu : usr=0.00%, sys=0.03%, ctx=12, majf=0, minf=1281 00:10:49.135 IO depths : 1=20.0%, 2=40.0%, 4=40.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.135 complete : 0=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.135 issued rwts: total=5,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.135 latency : target=0, window=0, percentile=100.00%, depth=1024 00:10:49.135 job1: (groupid=0, jobs=1): err= 0: pid=66274: Thu Jul 25 17:01:41 2024 00:10:49.135 read: IOPS=1, BW=1061KiB/s (1087kB/s)(15.0MiB/14476msec) 00:10:49.135 slat (usec): min=831, 
max=828865, avg=98042.46, stdev=257883.14 00:10:49.135 clat (msec): min=13004, max=14472, avg=14310.79, stdev=420.63 00:10:49.135 lat (msec): min=13627, max=14475, avg=14408.83, stdev=216.16 00:10:49.135 clat percentiles (msec): 00:10:49.135 | 1.00th=[12953], 5.00th=[12953], 10.00th=[13624], 20.00th=[14429], 00:10:49.135 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:10:49.135 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:10:49.135 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:10:49.135 | 99.99th=[14429] 00:10:49.135 lat (msec) : >=2000=100.00% 00:10:49.135 cpu : usr=0.00%, sys=0.10%, ctx=27, majf=0, minf=3841 00:10:49.135 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.135 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.135 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.135 latency : target=0, window=0, percentile=100.00%, depth=1024 00:10:49.135 00:10:49.135 Run status group 0 (all jobs): 00:10:49.135 READ: bw=10.9MiB/s (11.4MB/s), 355KiB/s-2686KiB/s (363kB/s-2750kB/s), io=158MiB (166MB), run=14441-14491msec 00:10:49.135 00:10:49.135 Disk stats (read/write): 00:10:49.135 sda: ios=53/0, merge=0/0, ticks=280323/0, in_queue=280322, util=98.96% 00:10:49.135 sdb: ios=37/0, merge=0/0, ticks=177966/0, in_queue=177967, util=99.37% 00:10:49.135 17:01:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@104 -- # '[' 0 -eq 1 ']' 00:10:49.135 17:01:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@116 -- # fio_pid=66430 00:10:49.135 17:01:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 128 -t rw -r 10 00:10:49.135 17:01:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@118 -- # sleep 3 00:10:49.135 [global] 00:10:49.135 thread=1 00:10:49.135 invalidate=1 00:10:49.135 rw=rw 00:10:49.135 time_based=1 00:10:49.135 runtime=10 00:10:49.135 ioengine=libaio 00:10:49.135 direct=1 00:10:49.135 bs=1048576 00:10:49.135 iodepth=128 00:10:49.135 norandommap=1 00:10:49.135 numjobs=1 00:10:49.135 00:10:49.135 [job0] 00:10:49.135 filename=/dev/sda 00:10:49.135 [job1] 00:10:49.135 filename=/dev/sdb 00:10:49.135 queue_depth set to 113 (sda) 00:10:49.135 queue_depth set to 113 (sdb) 00:10:49.135 job0: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:10:49.135 job1: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:10:49.135 fio-3.35 00:10:49.135 Starting 2 threads 00:10:49.135 [2024-07-25 17:01:41.445705] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:49.135 [2024-07-25 17:01:41.450927] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:52.478 17:01:44 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:52.478 [2024-07-25 17:01:44.422403] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (raid0) received event(SPDK_BDEV_EVENT_REMOVE) 00:10:52.478 17:01:44 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:10:52.478 17:01:44 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:52.478 fio: io_u error on file /dev/sda: Input/output error: write offset=95420416, 
buflen=1048576 00:10:52.478 fio: io_u error on file /dev/sda: Input/output error: write offset=96468992, buflen=1048576 00:10:52.478 17:01:44 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:10:52.478 17:01:44 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:52.478 fio: io_u error on file /dev/sda: Input/output error: write offset=97517568, buflen=1048576 00:10:52.478 fio: io_u error on file /dev/sda: Input/output error: write offset=98566144, buflen=1048576 00:10:52.478 fio: io_u error on file /dev/sda: Input/output error: write offset=99614720, buflen=1048576 00:10:52.478 fio: io_u error on file /dev/sda: Input/output error: write offset=100663296, buflen=1048576 00:10:52.478 fio: io_u error on file /dev/sda: Input/output error: write offset=101711872, buflen=1048576 00:10:52.478 fio: io_u error on file /dev/sda: Input/output error: write offset=102760448, buflen=1048576 00:10:52.478 fio: io_u error on file /dev/sda: Input/output error: write offset=103809024, buflen=1048576 00:10:52.478 fio: io_u error on file /dev/sda: Input/output error: write offset=104857600, buflen=1048576 00:10:52.478 fio: io_u error on file /dev/sda: Input/output error: write offset=105906176, buflen=1048576 00:10:52.478 fio: io_u error on file /dev/sda: Input/output error: write offset=106954752, buflen=1048576 00:10:52.478 fio: io_u error on file /dev/sda: Input/output error: write offset=108003328, buflen=1048576 00:10:52.478 fio: io_u error on file /dev/sda: Input/output error: write offset=109051904, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=110100480, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=74448896, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=75497472, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=76546048, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=77594624, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=111149056, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=112197632, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=78643200, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=113246208, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=79691776, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=114294784, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=115343360, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=80740352, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=81788928, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=82837504, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=116391936, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=117440512, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=83886080, buflen=1048576 00:10:52.479 fio: io_u error on 
file /dev/sda: Input/output error: read offset=84934656, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=118489088, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=85983232, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=87031808, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=119537664, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=88080384, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=120586240, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=121634816, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=89128960, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=122683392, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=123731968, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=90177536, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=91226112, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=92274688, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=93323264, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=124780544, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=125829120, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=126877696, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=94371840, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=95420416, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=127926272, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=128974848, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=130023424, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=131072000, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=132120576, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=96468992, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=133169152, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=97517568, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=0, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=1048576, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=2097152, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=4194304, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=98566144, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=5242880, buflen=1048576 00:10:52.479 fio: io_u 
error on file /dev/sda: Input/output error: read offset=102760448, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=6291456, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=99614720, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=103809024, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=3145728, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=7340032, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=100663296, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=101711872, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=8388608, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=104857600, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=105906176, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=9437184, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=106954752, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=108003328, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=109051904, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=110100480, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=10485760, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=111149056, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=112197632, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=113246208, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=114294784, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=115343360, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=11534336, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=12582912, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=116391936, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=13631488, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=14680064, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=15728640, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=117440512, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=118489088, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=119537664, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=16777216, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=17825792, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=120586240, buflen=1048576 00:10:52.479 
fio: io_u error on file /dev/sda: Input/output error: read offset=121634816, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=122683392, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=18874368, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=19922944, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=20971520, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=22020096, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=123731968, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: read offset=124780544, buflen=1048576 00:10:52.479 fio: io_u error on file /dev/sda: Input/output error: write offset=23068672, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: write offset=24117248, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: write offset=25165824, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: read offset=125829120, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: read offset=126877696, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: write offset=26214400, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: read offset=127926272, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: read offset=128974848, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: write offset=27262976, buflen=1048576 00:10:52.480 fio: pid=66458, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: read offset=130023424, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: read offset=131072000, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: write offset=28311552, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: write offset=29360128, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: read offset=132120576, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: write offset=30408704, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: write offset=31457280, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: write offset=32505856, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: read offset=133169152, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: write offset=33554432, buflen=1048576 00:10:52.480 fio: io_u error on file /dev/sda: Input/output error: write offset=34603008, buflen=1048576 00:10:52.480 17:01:44 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:52.758 [2024-07-25 17:01:45.047726] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Malloc2) received event(SPDK_BDEV_EVENT_REMOVE) 00:10:52.758 [2024-07-25 17:01:45.052333] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a3 00:10:52.758 [2024-07-25 17:01:45.053789] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a3 
00:10:52.758 [2024-07-25 17:01:45.054957] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a3 00:10:52.758 [2024-07-25 17:01:45.056260] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a3 00:10:52.758 [2024-07-25 17:01:45.057374] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a3 00:10:53.018 [2024-07-25 17:01:45.363001] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a3 00:10:53.018 [2024-07-25 17:01:45.364614] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a3 00:10:53.018 [2024-07-25 17:01:45.364706] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a3 00:10:53.018 [2024-07-25 17:01:45.364761] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a3 00:10:53.018 [2024-07-25 17:01:45.364809] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a3 00:10:53.018 [2024-07-25 17:01:45.364853] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a3 00:10:53.018 [2024-07-25 17:01:45.364897] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.364937] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.381675] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.381741] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.381788] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.387663] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 17:01:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@131 -- # fio_status=0 00:10:53.018 17:01:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # wait 66430 00:10:53.018 [2024-07-25 17:01:45.389777] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.391939] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.393351] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.395325] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.396808] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.398736] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.400178] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.401795] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.403275] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.405012] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a4 00:10:53.018 [2024-07-25 17:01:45.406149] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.018 [2024-07-25 17:01:45.407789] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.018 [2024-07-25 17:01:45.409003] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.018 [2024-07-25 17:01:45.410557] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.018 [2024-07-25 17:01:45.411744] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.018 [2024-07-25 17:01:45.413231] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.018 [2024-07-25 17:01:45.414436] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.018 [2024-07-25 17:01:45.415851] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.018 [2024-07-25 17:01:45.417025] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.018 [2024-07-25 17:01:45.418167] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.019 [2024-07-25 17:01:45.419568] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.019 [2024-07-25 17:01:45.420816] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.019 [2024-07-25 17:01:45.421898] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.019 [2024-07-25 17:01:45.423320] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.019 [2024-07-25 17:01:45.424504] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.019 [2024-07-25 17:01:45.425517] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a5 00:10:53.019 [2024-07-25 17:01:45.426934] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 00:10:53.019 [2024-07-25 17:01:45.427952] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 00:10:53.019 [2024-07-25 17:01:45.429397] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 00:10:53.019 [2024-07-25 17:01:45.430451] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 00:10:53.019 [2024-07-25 17:01:45.431875] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 00:10:53.019 [2024-07-25 17:01:45.433142] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 00:10:53.019 [2024-07-25 17:01:45.434196] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 00:10:53.019 [2024-07-25 17:01:45.435286] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 00:10:53.019 [2024-07-25 17:01:45.436616] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 00:10:53.019 [2024-07-25 17:01:45.437661] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 00:10:53.019 [2024-07-25 17:01:45.439151] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 00:10:53.019 [2024-07-25 17:01:45.440187] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 00:10:53.019 [2024-07-25 17:01:45.441229] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 00:10:53.019 [2024-07-25 17:01:45.442768] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 00:10:53.019 [2024-07-25 17:01:45.444046] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=10a6 
00:10:53.019 fio: io_u error on file /dev/sdb: Input/output error: write offset=694157312, buflen=1048576 00:10:53.019 fio: io_u error on file /dev/sdb: Input/output error: read offset=644874240, buflen=1048576 00:10:53.019 fio: pid=66459, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:53.280 00:10:53.280 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=66458: Thu Jul 25 17:01:45 2024 00:10:53.280 read: IOPS=170, BW=152MiB/s (159MB/s)(455MiB/3002msec) 00:10:53.280 slat (usec): min=23, max=35210, avg=2399.14, stdev=4959.25 00:10:53.280 clat (msec): min=161, max=560, avg=347.16, stdev=89.39 00:10:53.280 lat (msec): min=161, max=565, avg=349.64, stdev=89.88 00:10:53.280 clat
percentiles (msec): 00:10:53.280 | 1.00th=[ 176], 5.00th=[ 218], 10.00th=[ 234], 20.00th=[ 264], 00:10:53.280 | 30.00th=[ 296], 40.00th=[ 326], 50.00th=[ 351], 60.00th=[ 368], 00:10:53.280 | 70.00th=[ 384], 80.00th=[ 401], 90.00th=[ 506], 95.00th=[ 527], 00:10:53.280 | 99.00th=[ 542], 99.50th=[ 558], 99.90th=[ 558], 99.95th=[ 558], 00:10:53.280 | 99.99th=[ 558] 00:10:53.280 bw ( KiB/s): min=59392, max=235049, per=53.45%, avg=154676.40, stdev=64912.48, samples=5 00:10:53.280 iops : min= 58, max= 229, avg=150.80, stdev=63.25, samples=5 00:10:53.280 write: IOPS=181, BW=158MiB/s (166MB/s)(475MiB/3002msec); 0 zone resets 00:10:53.280 slat (usec): min=58, max=220014, avg=2943.31, stdev=10680.59 00:10:53.280 clat (msec): min=218, max=628, avg=393.64, stdev=88.99 00:10:53.280 lat (msec): min=218, max=628, avg=396.69, stdev=89.72 00:10:53.280 clat percentiles (msec): 00:10:53.280 | 1.00th=[ 224], 5.00th=[ 257], 10.00th=[ 275], 20.00th=[ 313], 00:10:53.280 | 30.00th=[ 347], 40.00th=[ 372], 50.00th=[ 397], 60.00th=[ 414], 00:10:53.280 | 70.00th=[ 426], 80.00th=[ 443], 90.00th=[ 542], 95.00th=[ 558], 00:10:53.280 | 99.00th=[ 584], 99.50th=[ 617], 99.90th=[ 625], 99.95th=[ 625], 00:10:53.280 | 99.99th=[ 625] 00:10:53.280 bw ( KiB/s): min=47104, max=241181, per=53.41%, avg=163669.80, stdev=71037.38, samples=5 00:10:53.280 iops : min= 46, max= 235, avg=159.60, stdev=69.18, samples=5 00:10:53.280 lat (msec) : 250=8.98%, 500=68.15%, 750=10.78% 00:10:53.280 cpu : usr=1.50%, sys=2.80%, ctx=500, majf=0, minf=2 00:10:53.280 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:10:53.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.280 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.280 issued rwts: total=512,546,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.280 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=66459: Thu Jul 25 17:01:45 2024 00:10:53.280 read: IOPS=178, BW=162MiB/s (170MB/s)(615MiB/3786msec) 00:10:53.280 slat (usec): min=24, max=63803, avg=2299.74, stdev=5591.34 00:10:53.280 clat (msec): min=123, max=692, avg=317.67, stdev=125.62 00:10:53.280 lat (msec): min=123, max=692, avg=320.15, stdev=126.03 00:10:53.280 clat percentiles (msec): 00:10:53.280 | 1.00th=[ 144], 5.00th=[ 169], 10.00th=[ 192], 20.00th=[ 213], 00:10:53.280 | 30.00th=[ 247], 40.00th=[ 268], 50.00th=[ 296], 60.00th=[ 309], 00:10:53.280 | 70.00th=[ 355], 80.00th=[ 388], 90.00th=[ 510], 95.00th=[ 634], 00:10:53.280 | 99.00th=[ 684], 99.50th=[ 684], 99.90th=[ 693], 99.95th=[ 693], 00:10:53.280 | 99.99th=[ 693] 00:10:53.280 bw ( KiB/s): min=157696, max=288768, per=70.27%, avg=203376.00, stdev=45885.63, samples=6 00:10:53.280 iops : min= 154, max= 282, avg=198.50, stdev=44.89, samples=6 00:10:53.280 write: IOPS=192, BW=174MiB/s (182MB/s)(658MiB/3786msec); 0 zone resets 00:10:53.280 slat (usec): min=56, max=391895, avg=3061.59, stdev=17169.40 00:10:53.280 clat (msec): min=168, max=751, avg=359.26, stdev=127.98 00:10:53.280 lat (msec): min=168, max=751, avg=361.93, stdev=128.69 00:10:53.280 clat percentiles (msec): 00:10:53.280 | 1.00th=[ 174], 5.00th=[ 205], 10.00th=[ 215], 20.00th=[ 271], 00:10:53.280 | 30.00th=[ 296], 40.00th=[ 309], 50.00th=[ 326], 60.00th=[ 351], 00:10:53.280 | 70.00th=[ 388], 80.00th=[ 418], 90.00th=[ 558], 95.00th=[ 659], 00:10:53.280 | 99.00th=[ 735], 99.50th=[ 735], 99.90th=[ 751], 99.95th=[ 751], 
00:10:53.280 | 99.99th=[ 751] 00:10:53.280 bw ( KiB/s): min=153293, max=251904, per=69.71%, avg=213623.50, stdev=45057.36, samples=6 00:10:53.280 iops : min= 149, max= 246, avg=208.50, stdev=44.19, samples=6 00:10:53.280 lat (msec) : 250=21.63%, 500=57.82%, 750=11.35%, 1000=0.07% 00:10:53.280 cpu : usr=1.40%, sys=3.22%, ctx=699, majf=0, minf=1 00:10:53.280 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:10:53.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.280 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.280 issued rwts: total=674,727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.280 00:10:53.280 Run status group 0 (all jobs): 00:10:53.280 READ: bw=283MiB/s (296MB/s), 152MiB/s-162MiB/s (159MB/s-170MB/s), io=1070MiB (1122MB), run=3002-3786msec 00:10:53.280 WRITE: bw=299MiB/s (314MB/s), 158MiB/s-174MiB/s (166MB/s-182MB/s), io=1133MiB (1188MB), run=3002-3786msec 00:10:53.280 00:10:53.280 Disk stats (read/write): 00:10:53.280 sda: ios=503/476, merge=0/0, ticks=62459/85099, in_queue=147557, util=85.17% 00:10:53.280 sdb: ios=671/685, merge=0/0, ticks=80032/110718, in_queue=190749, util=88.45% 00:10:53.280 17:01:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # fio_status=2 00:10:53.280 iscsi hotplug test: fio failed as expected 00:10:53.280 17:01:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@134 -- # '[' 2 -eq 0 ']' 00:10:53.280 17:01:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@138 -- # echo 'iscsi hotplug test: fio failed as expected' 00:10:53.280 17:01:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@141 -- # iscsicleanup 00:10:53.280 Cleaning up iSCSI connection 00:10:53.280 17:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:10:53.280 17:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:10:53.280 Logging out of session [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:10:53.280 Logout of [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
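
Note: the logout above and the node deletion just below are driven by the iscsicleanup helper. Going by the autotest_common.sh@982-@985 xtrace markers, its shape is roughly the sketch that follows; this is a reconstruction, not the verbatim helper, and the rm -rf argument is truncated out of the log, so it is left empty rather than guessed:

  # rough reconstruction of iscsicleanup from the xtrace markers
  iscsicleanup() {
      echo 'Cleaning up iSCSI connection'  # autotest_common.sh@982
      iscsiadm -m node --logout            # @983: end the active sessions
      iscsiadm -m node -o delete           # @984: drop the discovered node records
      rm -rf                               # @985: target path elided in the log
  }
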
00:10:53.280 17:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:10:53.280 17:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@985 -- # rm -rf 00:10:53.280 17:01:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2016-06.io.spdk:Target3 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@144 -- # delete_tmp_files 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@14 -- # rm -f /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/iscsi2.json 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@15 -- # rm -f ./local-job0-0-verify.state 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@16 -- # rm -f ./local-job1-1-verify.state 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@148 -- # killprocess 65918 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@950 -- # '[' -z 65918 ']' 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # kill -0 65918 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@955 -- # uname 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65918 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:53.539 killing process with pid 65918 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65918' 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@969 -- # kill 65918 00:10:53.539 17:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@974 -- # wait 65918 00:10:53.797 17:01:46 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@150 -- # iscsitestfini 00:10:53.797 17:01:46 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:10:53.797 00:10:53.797 real 0m28.780s 00:10:53.797 user 0m27.289s 00:10:53.797 sys 0m6.539s 00:10:53.797 17:01:46 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.797 ************************************ 00:10:53.797 END TEST iscsi_tgt_fio 00:10:53.797 ************************************ 00:10:53.797 17:01:46 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:10:54.055 17:01:46 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@38 -- # run_test iscsi_tgt_qos /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh 00:10:54.055 17:01:46 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:54.055 17:01:46 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.055 17:01:46 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:10:54.055 ************************************ 00:10:54.055 START TEST iscsi_tgt_qos 00:10:54.055 ************************************ 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh 00:10:54.055 * Looking for test storage... 
00:10:54.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@11 -- # iscsitestinit 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@44 -- # '[' -z 10.0.0.1 ']' 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@49 -- # '[' -z 10.0.0.2 ']' 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@54 -- # MALLOC_BDEV_SIZE=64 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@55 -- # MALLOC_BLOCK_SIZE=512 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@56 -- # IOPS_RESULT= 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@57 -- # BANDWIDTH_RESULT= 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@58 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@60 -- # timing_enter start_iscsi_tgt 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@63 -- # pid=66615 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@64 -- # echo 'Process pid: 66615' 00:10:54.055 Process pid: 66615 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@62 -- # ip netns 
exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@65 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@66 -- # waitforlisten 66615 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@831 -- # '[' -z 66615 ']' 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.055 17:01:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:54.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.056 17:01:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.056 17:01:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:54.056 17:01:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:54.056 [2024-07-25 17:01:46.482496] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:54.056 [2024-07-25 17:01:46.482569] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66615 ] 00:10:54.314 [2024-07-25 17:01:46.623899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.314 [2024-07-25 17:01:46.701405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.880 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.880 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@864 -- # return 0 00:10:54.880 iscsi_tgt is listening. Running tests... 00:10:54.880 17:01:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@67 -- # echo 'iscsi_tgt is listening. Running tests...' 
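
Note: condensed from the prologue above (qos.sh@62-@66), the launch amounts to starting iscsi_tgt inside the spdk_iscsi_ns network namespace and blocking until the RPC socket answers. A minimal sketch of that flow; the real waitforlisten in autotest_common.sh is more careful (timeouts, liveness checks on the pid):

  # start the target in its namespace and wait for /var/tmp/spdk.sock to answer
  ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt &
  pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.5   # simplified stand-in for waitforlisten
  done
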
00:10:54.880 17:01:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@69 -- # timing_exit start_iscsi_tgt 00:10:54.880 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.880 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@71 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@72 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@73 -- # rpc_cmd bdev_malloc_create 64 512 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:55.139 Malloc0 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@78 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias Malloc0:0 1:2 64 -d 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.139 17:01:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@79 -- # sleep 1 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@81 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:56.074 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@82 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:10:56.074 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:56.074 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
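
Note: every setup step the harness just ran is visible in the xtrace above, so the same state can be reproduced by hand with rpc.py pointed at the target's /var/tmp/spdk.sock inside the namespace:

  # portal group 1 on the target IP, initiator group 2 for the initiator subnet
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
  # 64 MiB malloc bdev with 512-byte blocks -> Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
  # Malloc0 as LUN 0, portal group 1 mapped to initiator group 2, queue depth 64
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias Malloc0:0 1:2 64 -d
  # discover and log in from the initiator side
  iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260
  iscsiadm -m node --login -p 10.0.0.1:3260

In the iscsi_create_target_node line, Malloc0:0 binds the bdev as LUN 0, 1:2 is the portal-group-to-initiator-group mapping, and 64 is the node's queue depth.
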
00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@84 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@87 -- # run_fio Malloc0 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:56.074 [2024-07-25 17:01:48.491134] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:10:56.074 "tick_rate": 2490000000, 00:10:56.074 "ticks": 1169883466784, 00:10:56.074 "bdevs": [ 00:10:56.074 { 00:10:56.074 "name": "Malloc0", 00:10:56.074 "bytes_read": 37376, 00:10:56.074 "num_read_ops": 3, 00:10:56.074 "bytes_written": 0, 00:10:56.074 "num_write_ops": 0, 00:10:56.074 "bytes_unmapped": 0, 00:10:56.074 "num_unmap_ops": 0, 00:10:56.074 "bytes_copied": 0, 00:10:56.074 "num_copy_ops": 0, 00:10:56.074 "read_latency_ticks": 864172, 00:10:56.074 "max_read_latency_ticks": 362920, 00:10:56.074 "min_read_latency_ticks": 241770, 00:10:56.074 "write_latency_ticks": 0, 00:10:56.074 "max_write_latency_ticks": 0, 00:10:56.074 "min_write_latency_ticks": 0, 00:10:56.074 "unmap_latency_ticks": 0, 00:10:56.074 "max_unmap_latency_ticks": 0, 00:10:56.074 "min_unmap_latency_ticks": 0, 00:10:56.074 "copy_latency_ticks": 0, 00:10:56.074 "max_copy_latency_ticks": 0, 00:10:56.074 "min_copy_latency_ticks": 0, 00:10:56.074 "io_error": {} 00:10:56.074 } 00:10:56.074 ] 00:10:56.074 }' 00:10:56.074 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:10:56.332 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=3 00:10:56.332 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:10:56.332 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=37376 00:10:56.332 17:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:10:56.332 [global] 00:10:56.332 thread=1 00:10:56.332 invalidate=1 00:10:56.332 rw=randread 00:10:56.332 time_based=1 00:10:56.332 runtime=5 00:10:56.332 ioengine=libaio 00:10:56.332 direct=1 00:10:56.332 bs=1024 00:10:56.332 iodepth=128 00:10:56.332 norandommap=1 00:10:56.332 numjobs=1 00:10:56.332 00:10:56.332 [job0] 00:10:56.332 filename=/dev/sda 00:10:56.332 queue_depth set to 113 (sda) 00:10:56.332 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:10:56.332 fio-3.35 00:10:56.332 Starting 1 thread 00:11:01.595 
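
Note: per the qos.sh@14-@33 xtrace, run_fio brackets each fio run with bdev_get_iostat snapshots and averages the deltas over the 5-second runtime; the end snapshot appears with the results that follow. A minimal reconstruction, reusing the helper names the log itself shows:

  # sketch of run_fio's accounting, pieced together from the xtrace
  run_time=5
  iostats=$(rpc_cmd bdev_get_iostat -b Malloc0)
  start_io_count=$(jq -r '.bdevs[0].num_read_ops' <<< "$iostats")
  start_bytes_read=$(jq -r '.bdevs[0].bytes_read' <<< "$iostats")
  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r "$run_time"
  iostats=$(rpc_cmd bdev_get_iostat -b Malloc0)
  end_io_count=$(jq -r '.bdevs[0].num_read_ops' <<< "$iostats")
  end_bytes_read=$(jq -r '.bdevs[0].bytes_read' <<< "$iostats")
  IOPS_RESULT=$(((end_io_count - start_io_count) / run_time))          # (268257 - 3) / 5 = 53650
  BANDWIDTH_RESULT=$(((end_bytes_read - start_bytes_read) / run_time)) # (275747328 - 37376) / 5 = 55141990

These two values are what the later verify_qos_limits checks compare against the configured limits.
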
00:11:01.595 job0: (groupid=0, jobs=1): err= 0: pid=66696: Thu Jul 25 17:01:53 2024 00:11:01.595 read: IOPS=53.6k, BW=52.4MiB/s (54.9MB/s)(262MiB/5002msec) 00:11:01.595 slat (nsec): min=1927, max=1715.3k, avg=17313.28, stdev=52904.91 00:11:01.595 clat (usec): min=1497, max=4996, avg=2369.10, stdev=115.69 00:11:01.595 lat (usec): min=1509, max=5058, avg=2386.41, stdev=103.80 00:11:01.595 clat percentiles (usec): 00:11:01.595 | 1.00th=[ 2114], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2343], 00:11:01.595 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2376], 00:11:01.595 | 70.00th=[ 2376], 80.00th=[ 2409], 90.00th=[ 2409], 95.00th=[ 2442], 00:11:01.595 | 99.00th=[ 2540], 99.50th=[ 2704], 99.90th=[ 4178], 99.95th=[ 4686], 00:11:01.595 | 99.99th=[ 4948] 00:11:01.595 bw ( KiB/s): min=52830, max=53920, per=100.00%, avg=53683.33, stdev=336.03, samples=9 00:11:01.595 iops : min=52830, max=53920, avg=53683.56, stdev=336.07, samples=9 00:11:01.595 lat (msec) : 2=0.17%, 4=99.70%, 10=0.13% 00:11:01.595 cpu : usr=7.30%, sys=16.72%, ctx=154781, majf=0, minf=32 00:11:01.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:01.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.595 issued rwts: total=268200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.595 00:11:01.595 Run status group 0 (all jobs): 00:11:01.595 READ: bw=52.4MiB/s (54.9MB/s), 52.4MiB/s-52.4MiB/s (54.9MB/s-54.9MB/s), io=262MiB (275MB), run=5002-5002msec 00:11:01.596 00:11:01.596 Disk stats (read/write): 00:11:01.596 sda: ios=262296/0, merge=0/0, ticks=533181/0, in_queue=533181, util=98.15% 00:11:01.596 17:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:11:01.596 17:01:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.596 17:01:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:01.596 17:01:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.596 17:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:11:01.596 "tick_rate": 2490000000, 00:11:01.596 "ticks": 1183503452418, 00:11:01.596 "bdevs": [ 00:11:01.596 { 00:11:01.596 "name": "Malloc0", 00:11:01.596 "bytes_read": 275747328, 00:11:01.596 "num_read_ops": 268257, 00:11:01.596 "bytes_written": 0, 00:11:01.596 "num_write_ops": 0, 00:11:01.596 "bytes_unmapped": 0, 00:11:01.596 "num_unmap_ops": 0, 00:11:01.596 "bytes_copied": 0, 00:11:01.596 "num_copy_ops": 0, 00:11:01.596 "read_latency_ticks": 60052358020, 00:11:01.596 "max_read_latency_ticks": 429346, 00:11:01.596 "min_read_latency_ticks": 9966, 00:11:01.596 "write_latency_ticks": 0, 00:11:01.596 "max_write_latency_ticks": 0, 00:11:01.596 "min_write_latency_ticks": 0, 00:11:01.596 "unmap_latency_ticks": 0, 00:11:01.596 "max_unmap_latency_ticks": 0, 00:11:01.596 "min_unmap_latency_ticks": 0, 00:11:01.596 "copy_latency_ticks": 0, 00:11:01.596 "max_copy_latency_ticks": 0, 00:11:01.596 "min_copy_latency_ticks": 0, 00:11:01.596 "io_error": {} 00:11:01.596 } 00:11:01.596 ] 00:11:01.596 }' 00:11:01.596 17:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=268257 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r 
'.bdevs[0].bytes_read' 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=275747328 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=53650 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=55141990 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@90 -- # IOPS_LIMIT=26825 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@91 -- # BANDWIDTH_LIMIT=27570995 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@94 -- # READ_BANDWIDTH_LIMIT=13785497 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@98 -- # IOPS_LIMIT=26000 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@99 -- # BANDWIDTH_LIMIT_MB=26 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@100 -- # BANDWIDTH_LIMIT=27262976 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@101 -- # READ_BANDWIDTH_LIMIT_MB=13 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@102 -- # READ_BANDWIDTH_LIMIT=13631488 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@105 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 26000 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.596 17:01:54 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@106 -- # run_fio Malloc0 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:11:01.854 "tick_rate": 2490000000, 00:11:01.854 "ticks": 1183832548262, 00:11:01.854 "bdevs": [ 00:11:01.854 { 00:11:01.854 "name": "Malloc0", 00:11:01.854 "bytes_read": 275747328, 00:11:01.854 "num_read_ops": 268257, 00:11:01.854 "bytes_written": 0, 00:11:01.854 "num_write_ops": 0, 00:11:01.854 "bytes_unmapped": 0, 00:11:01.854 "num_unmap_ops": 0, 00:11:01.854 "bytes_copied": 0, 00:11:01.854 "num_copy_ops": 0, 00:11:01.854 "read_latency_ticks": 60052358020, 00:11:01.854 "max_read_latency_ticks": 429346, 00:11:01.854 "min_read_latency_ticks": 9966, 00:11:01.854 "write_latency_ticks": 0, 00:11:01.854 "max_write_latency_ticks": 0, 00:11:01.854 "min_write_latency_ticks": 0, 00:11:01.854 "unmap_latency_ticks": 0, 00:11:01.854 "max_unmap_latency_ticks": 0, 00:11:01.854 "min_unmap_latency_ticks": 0, 00:11:01.854 "copy_latency_ticks": 0, 00:11:01.854 "max_copy_latency_ticks": 0, 00:11:01.854 "min_copy_latency_ticks": 0, 
00:11:01.854 "io_error": {} 00:11:01.854 } 00:11:01.854 ] 00:11:01.854 }' 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=268257 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=275747328 00:11:01.854 17:01:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:11:01.854 [global] 00:11:01.854 thread=1 00:11:01.854 invalidate=1 00:11:01.854 rw=randread 00:11:01.854 time_based=1 00:11:01.854 runtime=5 00:11:01.854 ioengine=libaio 00:11:01.854 direct=1 00:11:01.854 bs=1024 00:11:01.854 iodepth=128 00:11:01.854 norandommap=1 00:11:01.855 numjobs=1 00:11:01.855 00:11:01.855 [job0] 00:11:01.855 filename=/dev/sda 00:11:01.855 queue_depth set to 113 (sda) 00:11:02.113 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:11:02.113 fio-3.35 00:11:02.113 Starting 1 thread 00:11:07.418 00:11:07.418 job0: (groupid=0, jobs=1): err= 0: pid=66781: Thu Jul 25 17:01:59 2024 00:11:07.418 read: IOPS=26.0k, BW=25.4MiB/s (26.7MB/s)(127MiB/5004msec) 00:11:07.418 slat (usec): min=4, max=968, avg=34.17, stdev=112.42 00:11:07.418 clat (usec): min=1795, max=7783, avg=4878.33, stdev=275.96 00:11:07.418 lat (usec): min=1806, max=7789, avg=4912.50, stdev=259.84 00:11:07.418 clat percentiles (usec): 00:11:07.418 | 1.00th=[ 4228], 5.00th=[ 4228], 10.00th=[ 4555], 20.00th=[ 4686], 00:11:07.418 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4883], 60.00th=[ 4948], 00:11:07.418 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5145], 95.00th=[ 5211], 00:11:07.418 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 5538], 99.95th=[ 5866], 00:11:07.418 | 99.99th=[ 7635] 00:11:07.418 bw ( KiB/s): min=26018, max=26104, per=100.00%, avg=26081.67, stdev=28.46, samples=9 00:11:07.418 iops : min=26018, max=26104, avg=26081.67, stdev=28.46, samples=9 00:11:07.418 lat (msec) : 2=0.05%, 4=0.38%, 10=99.57% 00:11:07.418 cpu : usr=10.61%, sys=24.85%, ctx=70241, majf=0, minf=32 00:11:07.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:07.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.418 issued rwts: total=130314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.418 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.418 00:11:07.418 Run status group 0 (all jobs): 00:11:07.418 READ: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=127MiB (133MB), run=5004-5004msec 00:11:07.418 00:11:07.418 Disk stats (read/write): 00:11:07.418 sda: ios=127418/0, merge=0/0, ticks=513131/0, in_queue=513131, util=98.15% 00:11:07.418 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:11:07.418 17:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.418 17:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:07.418 17:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.418 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:11:07.418 "tick_rate": 2490000000, 00:11:07.418 "ticks": 1197435349564, 00:11:07.418 "bdevs": [ 
00:11:07.418 { 00:11:07.418 "name": "Malloc0", 00:11:07.418 "bytes_read": 409188864, 00:11:07.418 "num_read_ops": 398571, 00:11:07.418 "bytes_written": 0, 00:11:07.418 "num_write_ops": 0, 00:11:07.418 "bytes_unmapped": 0, 00:11:07.418 "num_unmap_ops": 0, 00:11:07.418 "bytes_copied": 0, 00:11:07.418 "num_copy_ops": 0, 00:11:07.418 "read_latency_ticks": 690865646082, 00:11:07.418 "max_read_latency_ticks": 6543000, 00:11:07.418 "min_read_latency_ticks": 9966, 00:11:07.418 "write_latency_ticks": 0, 00:11:07.418 "max_write_latency_ticks": 0, 00:11:07.418 "min_write_latency_ticks": 0, 00:11:07.418 "unmap_latency_ticks": 0, 00:11:07.418 "max_unmap_latency_ticks": 0, 00:11:07.418 "min_unmap_latency_ticks": 0, 00:11:07.418 "copy_latency_ticks": 0, 00:11:07.418 "max_copy_latency_ticks": 0, 00:11:07.418 "min_copy_latency_ticks": 0, 00:11:07.418 "io_error": {} 00:11:07.418 } 00:11:07.418 ] 00:11:07.418 }' 00:11:07.418 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:11:07.418 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=398571 00:11:07.418 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:11:07.418 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=409188864 00:11:07.418 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=26062 00:11:07.418 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=26688307 00:11:07.418 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@107 -- # verify_qos_limits 26062 26000 00:11:07.418 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=26062 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=26000 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@110 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@111 -- # run_fio Malloc0 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
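
Note: two pieces of qos.sh arithmetic interleave through this stretch of the log. The limit derivation (qos.sh@90-@102) halves the unthrottled baseline and rounds it to friendly units, and verify_qos_limits (qos.sh@37-@41) checks the measured rate against a tolerance band with two bc comparisons; the 0.85/1.05 factors below are an assumption consistent with those checks, not values shown in the log:

  # limit derivation: halve the ~53650 IOPS / ~55.1 MB/s baseline, then round
  IOPS_LIMIT=$((IOPS_RESULT / 2))                                  # 53650 / 2 = 26825
  BANDWIDTH_LIMIT=$((BANDWIDTH_RESULT / 2))                        # 55141990 / 2 = 27570995
  READ_BANDWIDTH_LIMIT=$((BANDWIDTH_LIMIT / 2))                    # 13785497
  IOPS_LIMIT=$((IOPS_LIMIT / 1000 * 1000))                         # -> 26000 IOPS
  BANDWIDTH_LIMIT_MB=$((BANDWIDTH_LIMIT / 1024 / 1024))            # -> 26 MiB
  BANDWIDTH_LIMIT=$((BANDWIDTH_LIMIT_MB * 1024 * 1024))            # -> 27262976 bytes
  READ_BANDWIDTH_LIMIT_MB=$((READ_BANDWIDTH_LIMIT / 1024 / 1024))  # -> 13 MiB
  READ_BANDWIDTH_LIMIT=$((READ_BANDWIDTH_LIMIT_MB * 1024 * 1024))  # -> 13631488 bytes

  # verify_qos_limits, reconstructed: pass if the result sits inside the band
  verify_qos_limits() {
      local result=$1
      local limit=$2
      [ "$(bc <<< "$result > $limit * 0.85")" -eq 1 ] \
          && [ "$(bc <<< "$result < $limit * 1.05")" -eq 1 ]
  }
  verify_qos_limits 26062 26000   # 26062 lies within [22100, 27300], so the run passes
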
00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:11:07.419 "tick_rate": 2490000000, 00:11:07.419 "ticks": 1197784228336, 00:11:07.419 "bdevs": [ 00:11:07.419 { 00:11:07.419 "name": "Malloc0", 00:11:07.419 "bytes_read": 409188864, 00:11:07.419 "num_read_ops": 398571, 00:11:07.419 "bytes_written": 0, 00:11:07.419 "num_write_ops": 0, 00:11:07.419 "bytes_unmapped": 0, 00:11:07.419 "num_unmap_ops": 0, 00:11:07.419 "bytes_copied": 0, 00:11:07.419 "num_copy_ops": 0, 00:11:07.419 "read_latency_ticks": 690865646082, 00:11:07.419 "max_read_latency_ticks": 6543000, 00:11:07.419 "min_read_latency_ticks": 9966, 00:11:07.419 "write_latency_ticks": 0, 00:11:07.419 "max_write_latency_ticks": 0, 00:11:07.419 "min_write_latency_ticks": 0, 00:11:07.419 "unmap_latency_ticks": 0, 00:11:07.419 "max_unmap_latency_ticks": 0, 00:11:07.419 "min_unmap_latency_ticks": 0, 00:11:07.419 "copy_latency_ticks": 0, 00:11:07.419 "max_copy_latency_ticks": 0, 00:11:07.419 "min_copy_latency_ticks": 0, 00:11:07.419 "io_error": {} 00:11:07.419 } 00:11:07.419 ] 00:11:07.419 }' 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=398571 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=409188864 00:11:07.419 17:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:11:07.419 [global] 00:11:07.419 thread=1 00:11:07.419 invalidate=1 00:11:07.419 rw=randread 00:11:07.419 time_based=1 00:11:07.419 runtime=5 00:11:07.419 ioengine=libaio 00:11:07.419 direct=1 00:11:07.419 bs=1024 00:11:07.419 iodepth=128 00:11:07.419 norandommap=1 00:11:07.419 numjobs=1 00:11:07.419 00:11:07.419 [job0] 00:11:07.419 filename=/dev/sda 00:11:07.419 queue_depth set to 113 (sda) 00:11:07.677 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:11:07.677 fio-3.35 00:11:07.677 Starting 1 thread 00:11:13.018 00:11:13.018 job0: (groupid=0, jobs=1): err= 0: pid=66881: Thu Jul 25 17:02:05 2024 00:11:13.018 read: IOPS=53.6k, BW=52.4MiB/s (54.9MB/s)(262MiB/5002msec) 00:11:13.018 slat (nsec): min=1926, max=600577, avg=17232.19, stdev=51735.08 00:11:13.018 clat (usec): min=1472, max=4189, avg=2368.72, stdev=105.92 00:11:13.018 lat (usec): min=1484, max=4194, avg=2385.96, stdev=93.33 00:11:13.018 clat percentiles (usec): 00:11:13.018 | 1.00th=[ 2089], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2343], 00:11:13.018 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2376], 00:11:13.018 | 70.00th=[ 2409], 80.00th=[ 2409], 90.00th=[ 2442], 95.00th=[ 2442], 00:11:13.018 | 99.00th=[ 2671], 99.50th=[ 2835], 99.90th=[ 3392], 99.95th=[ 3523], 00:11:13.018 | 99.99th=[ 3884] 00:11:13.018 bw ( KiB/s): min=53317, max=54316, per=100.00%, avg=53646.78, stdev=304.67, samples=9 00:11:13.018 iops : min=53317, max=54316, avg=53647.00, stdev=304.90, samples=9 00:11:13.018 lat (msec) : 2=0.60%, 4=99.39%, 10=0.01% 00:11:13.018 cpu : usr=8.10%, sys=17.64%, ctx=159684, majf=0, minf=32 00:11:13.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:13.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:11:13.018 issued rwts: total=268251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.018 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.018 00:11:13.018 Run status group 0 (all jobs): 00:11:13.018 READ: bw=52.4MiB/s (54.9MB/s), 52.4MiB/s-52.4MiB/s (54.9MB/s-54.9MB/s), io=262MiB (275MB), run=5002-5002msec 00:11:13.018 00:11:13.018 Disk stats (read/write): 00:11:13.018 sda: ios=262329/0, merge=0/0, ticks=526980/0, in_queue=526980, util=98.13% 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:11:13.018 "tick_rate": 2490000000, 00:11:13.018 "ticks": 1211427615032, 00:11:13.018 "bdevs": [ 00:11:13.018 { 00:11:13.018 "name": "Malloc0", 00:11:13.018 "bytes_read": 683877888, 00:11:13.018 "num_read_ops": 666822, 00:11:13.018 "bytes_written": 0, 00:11:13.018 "num_write_ops": 0, 00:11:13.018 "bytes_unmapped": 0, 00:11:13.018 "num_unmap_ops": 0, 00:11:13.018 "bytes_copied": 0, 00:11:13.018 "num_copy_ops": 0, 00:11:13.018 "read_latency_ticks": 750695476002, 00:11:13.018 "max_read_latency_ticks": 6543000, 00:11:13.018 "min_read_latency_ticks": 9966, 00:11:13.018 "write_latency_ticks": 0, 00:11:13.018 "max_write_latency_ticks": 0, 00:11:13.018 "min_write_latency_ticks": 0, 00:11:13.018 "unmap_latency_ticks": 0, 00:11:13.018 "max_unmap_latency_ticks": 0, 00:11:13.018 "min_unmap_latency_ticks": 0, 00:11:13.018 "copy_latency_ticks": 0, 00:11:13.018 "max_copy_latency_ticks": 0, 00:11:13.018 "min_copy_latency_ticks": 0, 00:11:13.018 "io_error": {} 00:11:13.018 } 00:11:13.018 ] 00:11:13.018 }' 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=666822 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=683877888 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=53650 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=54937804 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@112 -- # '[' 53650 -gt 26000 ']' 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@115 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 26000 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@116 -- # run_fio Malloc0 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:11:13.018 17:02:05 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.018 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:11:13.018 "tick_rate": 2490000000, 00:11:13.018 "ticks": 1211746334704, 00:11:13.018 "bdevs": [ 00:11:13.018 { 00:11:13.018 "name": "Malloc0", 00:11:13.018 "bytes_read": 683877888, 00:11:13.018 "num_read_ops": 666822, 00:11:13.018 "bytes_written": 0, 00:11:13.018 "num_write_ops": 0, 00:11:13.018 "bytes_unmapped": 0, 00:11:13.018 "num_unmap_ops": 0, 00:11:13.018 "bytes_copied": 0, 00:11:13.018 "num_copy_ops": 0, 00:11:13.019 "read_latency_ticks": 750695476002, 00:11:13.019 "max_read_latency_ticks": 6543000, 00:11:13.019 "min_read_latency_ticks": 9966, 00:11:13.019 "write_latency_ticks": 0, 00:11:13.019 "max_write_latency_ticks": 0, 00:11:13.019 "min_write_latency_ticks": 0, 00:11:13.019 "unmap_latency_ticks": 0, 00:11:13.019 "max_unmap_latency_ticks": 0, 00:11:13.019 "min_unmap_latency_ticks": 0, 00:11:13.019 "copy_latency_ticks": 0, 00:11:13.019 "max_copy_latency_ticks": 0, 00:11:13.019 "min_copy_latency_ticks": 0, 00:11:13.019 "io_error": {} 00:11:13.019 } 00:11:13.019 ] 00:11:13.019 }' 00:11:13.019 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:11:13.019 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=666822 00:11:13.019 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:11:13.019 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=683877888 00:11:13.019 17:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:11:13.019 [global] 00:11:13.019 thread=1 00:11:13.019 invalidate=1 00:11:13.019 rw=randread 00:11:13.019 time_based=1 00:11:13.019 runtime=5 00:11:13.019 ioengine=libaio 00:11:13.019 direct=1 00:11:13.019 bs=1024 00:11:13.019 iodepth=128 00:11:13.019 norandommap=1 00:11:13.019 numjobs=1 00:11:13.019 00:11:13.019 [job0] 00:11:13.019 filename=/dev/sda 00:11:13.019 queue_depth set to 113 (sda) 00:11:13.278 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:11:13.278 fio-3.35 00:11:13.278 Starting 1 thread 00:11:18.565 00:11:18.565 job0: (groupid=0, jobs=1): err= 0: pid=66966: Thu Jul 25 17:02:10 2024 00:11:18.565 read: IOPS=26.0k, BW=25.4MiB/s (26.7MB/s)(127MiB/5004msec) 00:11:18.565 slat (usec): min=2, max=1525, avg=34.10, stdev=112.23 00:11:18.565 clat (usec): min=2076, max=8259, avg=4879.90, stdev=246.57 00:11:18.565 lat (usec): min=2099, max=8277, avg=4914.00, stdev=225.14 00:11:18.565 clat percentiles (usec): 00:11:18.565 | 1.00th=[ 4178], 5.00th=[ 4228], 10.00th=[ 4555], 20.00th=[ 4817], 00:11:18.565 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 4948], 60.00th=[ 4948], 00:11:18.565 | 70.00th=[ 4948], 80.00th=[ 5014], 90.00th=[ 5080], 95.00th=[ 5145], 00:11:18.565 | 99.00th=[ 5276], 99.50th=[ 5276], 99.90th=[ 5800], 99.95th=[ 6390], 00:11:18.565 | 99.99th=[ 7832] 00:11:18.565 bw ( KiB/s): min=25972, 
max=26106, per=100.00%, avg=26072.00, stdev=42.25, samples=9 00:11:18.565 iops : min=25972, max=26106, avg=26072.00, stdev=42.25, samples=9 00:11:18.565 lat (msec) : 4=0.51%, 10=99.49% 00:11:18.565 cpu : usr=11.65%, sys=23.65%, ctx=70364, majf=0, minf=32 00:11:18.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:18.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.565 issued rwts: total=130286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.565 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.565 00:11:18.565 Run status group 0 (all jobs): 00:11:18.565 READ: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=127MiB (133MB), run=5004-5004msec 00:11:18.565 00:11:18.565 Disk stats (read/write): 00:11:18.565 sda: ios=127374/0, merge=0/0, ticks=513361/0, in_queue=513361, util=98.13% 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:11:18.565 "tick_rate": 2490000000, 00:11:18.565 "ticks": 1225333967556, 00:11:18.565 "bdevs": [ 00:11:18.565 { 00:11:18.565 "name": "Malloc0", 00:11:18.565 "bytes_read": 817290752, 00:11:18.565 "num_read_ops": 797108, 00:11:18.565 "bytes_written": 0, 00:11:18.565 "num_write_ops": 0, 00:11:18.565 "bytes_unmapped": 0, 00:11:18.565 "num_unmap_ops": 0, 00:11:18.565 "bytes_copied": 0, 00:11:18.565 "num_copy_ops": 0, 00:11:18.565 "read_latency_ticks": 1370699493216, 00:11:18.565 "max_read_latency_ticks": 8730452, 00:11:18.565 "min_read_latency_ticks": 9966, 00:11:18.565 "write_latency_ticks": 0, 00:11:18.565 "max_write_latency_ticks": 0, 00:11:18.565 "min_write_latency_ticks": 0, 00:11:18.565 "unmap_latency_ticks": 0, 00:11:18.565 "max_unmap_latency_ticks": 0, 00:11:18.565 "min_unmap_latency_ticks": 0, 00:11:18.565 "copy_latency_ticks": 0, 00:11:18.565 "max_copy_latency_ticks": 0, 00:11:18.565 "min_copy_latency_ticks": 0, 00:11:18.565 "io_error": {} 00:11:18.565 } 00:11:18.565 ] 00:11:18.565 }' 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=797108 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=817290752 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=26057 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=26682572 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@117 -- # verify_qos_limits 26057 26000 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=26057 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=26000 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:11:18.565 17:02:10 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:11:18.565 I/O rate limiting tests successful 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@119 -- # echo 'I/O rate limiting tests successful' 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@122 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 --rw_mbytes_per_sec 26 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@123 -- # run_fio Malloc0 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:11:18.565 "tick_rate": 2490000000, 00:11:18.565 "ticks": 1225702564352, 00:11:18.565 "bdevs": [ 00:11:18.565 { 00:11:18.565 "name": "Malloc0", 00:11:18.565 "bytes_read": 817290752, 00:11:18.565 "num_read_ops": 797108, 00:11:18.565 "bytes_written": 0, 00:11:18.565 "num_write_ops": 0, 00:11:18.565 "bytes_unmapped": 0, 00:11:18.565 "num_unmap_ops": 0, 00:11:18.565 "bytes_copied": 0, 00:11:18.565 "num_copy_ops": 0, 00:11:18.565 "read_latency_ticks": 1370699493216, 00:11:18.565 "max_read_latency_ticks": 8730452, 00:11:18.565 "min_read_latency_ticks": 9966, 00:11:18.565 "write_latency_ticks": 0, 00:11:18.565 "max_write_latency_ticks": 0, 00:11:18.565 "min_write_latency_ticks": 0, 00:11:18.565 "unmap_latency_ticks": 0, 00:11:18.565 "max_unmap_latency_ticks": 0, 00:11:18.565 "min_unmap_latency_ticks": 0, 00:11:18.565 "copy_latency_ticks": 0, 00:11:18.565 "max_copy_latency_ticks": 0, 00:11:18.565 "min_copy_latency_ticks": 0, 00:11:18.565 "io_error": {} 00:11:18.565 } 00:11:18.565 ] 00:11:18.565 }' 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=797108 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=817290752 00:11:18.565 17:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:11:18.565 [global] 00:11:18.565 thread=1 00:11:18.565 invalidate=1 00:11:18.565 rw=randread 00:11:18.565 time_based=1 00:11:18.565 runtime=5 00:11:18.565 ioengine=libaio 00:11:18.565 
direct=1 00:11:18.565 bs=1024 00:11:18.565 iodepth=128 00:11:18.565 norandommap=1 00:11:18.565 numjobs=1 00:11:18.565 00:11:18.565 [job0] 00:11:18.565 filename=/dev/sda 00:11:18.823 queue_depth set to 113 (sda) 00:11:18.823 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:11:18.823 fio-3.35 00:11:18.823 Starting 1 thread 00:11:24.089 00:11:24.089 job0: (groupid=0, jobs=1): err= 0: pid=67061: Thu Jul 25 17:02:16 2024 00:11:24.089 read: IOPS=26.7k, BW=26.0MiB/s (27.3MB/s)(130MiB/5004msec) 00:11:24.089 slat (usec): min=4, max=1296, avg=33.26, stdev=112.48 00:11:24.089 clat (usec): min=1790, max=8715, avg=4765.69, stdev=366.16 00:11:24.089 lat (usec): min=1800, max=8733, avg=4798.95, stdev=358.32 00:11:24.089 clat percentiles (usec): 00:11:24.089 | 1.00th=[ 3785], 5.00th=[ 4178], 10.00th=[ 4228], 20.00th=[ 4424], 00:11:24.089 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:11:24.089 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5145], 95.00th=[ 5211], 00:11:24.089 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 5932], 99.95th=[ 6652], 00:11:24.089 | 99.99th=[ 8455] 00:11:24.089 bw ( KiB/s): min=26614, max=26730, per=100.00%, avg=26696.44, stdev=37.17, samples=9 00:11:24.089 iops : min=26614, max=26730, avg=26696.44, stdev=37.17, samples=9 00:11:24.089 lat (msec) : 2=0.06%, 4=2.09%, 10=97.85% 00:11:24.089 cpu : usr=11.63%, sys=23.79%, ctx=71664, majf=0, minf=32 00:11:24.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:24.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.089 issued rwts: total=133411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.089 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.089 00:11:24.089 Run status group 0 (all jobs): 00:11:24.089 READ: bw=26.0MiB/s (27.3MB/s), 26.0MiB/s-26.0MiB/s (27.3MB/s-27.3MB/s), io=130MiB (137MB), run=5004-5004msec 00:11:24.089 00:11:24.089 Disk stats (read/write): 00:11:24.089 sda: ios=130420/0, merge=0/0, ticks=509805/0, in_queue=509805, util=98.15% 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:11:24.089 "tick_rate": 2490000000, 00:11:24.089 "ticks": 1239291458304, 00:11:24.089 "bdevs": [ 00:11:24.089 { 00:11:24.089 "name": "Malloc0", 00:11:24.089 "bytes_read": 953903616, 00:11:24.089 "num_read_ops": 930519, 00:11:24.089 "bytes_written": 0, 00:11:24.089 "num_write_ops": 0, 00:11:24.089 "bytes_unmapped": 0, 00:11:24.089 "num_unmap_ops": 0, 00:11:24.089 "bytes_copied": 0, 00:11:24.089 "num_copy_ops": 0, 00:11:24.089 "read_latency_ticks": 1972094319942, 00:11:24.089 "max_read_latency_ticks": 8730452, 00:11:24.089 "min_read_latency_ticks": 9966, 00:11:24.089 "write_latency_ticks": 0, 00:11:24.089 "max_write_latency_ticks": 0, 00:11:24.089 "min_write_latency_ticks": 0, 00:11:24.089 "unmap_latency_ticks": 0, 00:11:24.089 "max_unmap_latency_ticks": 0, 00:11:24.089 "min_unmap_latency_ticks": 0, 00:11:24.089 "copy_latency_ticks": 0, 00:11:24.089 "max_copy_latency_ticks": 0, 
00:11:24.089 "min_copy_latency_ticks": 0, 00:11:24.089 "io_error": {} 00:11:24.089 } 00:11:24.089 ] 00:11:24.089 }' 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=930519 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=953903616 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=26682 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=27322572 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@124 -- # verify_qos_limits 27322572 27262976 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=27322572 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=27262976 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@127 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 0 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@128 -- # run_fio Malloc0 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:11:24.089 "tick_rate": 2490000000, 00:11:24.089 "ticks": 1239590279698, 00:11:24.089 "bdevs": [ 00:11:24.089 { 00:11:24.089 "name": "Malloc0", 00:11:24.089 "bytes_read": 953903616, 00:11:24.089 "num_read_ops": 930519, 00:11:24.089 "bytes_written": 0, 00:11:24.089 "num_write_ops": 0, 00:11:24.089 "bytes_unmapped": 0, 00:11:24.089 "num_unmap_ops": 0, 00:11:24.089 "bytes_copied": 0, 00:11:24.089 "num_copy_ops": 0, 00:11:24.089 "read_latency_ticks": 1972094319942, 00:11:24.089 "max_read_latency_ticks": 8730452, 00:11:24.089 "min_read_latency_ticks": 9966, 00:11:24.089 "write_latency_ticks": 0, 00:11:24.089 "max_write_latency_ticks": 0, 00:11:24.089 "min_write_latency_ticks": 0, 00:11:24.089 
"unmap_latency_ticks": 0, 00:11:24.089 "max_unmap_latency_ticks": 0, 00:11:24.089 "min_unmap_latency_ticks": 0, 00:11:24.089 "copy_latency_ticks": 0, 00:11:24.089 "max_copy_latency_ticks": 0, 00:11:24.089 "min_copy_latency_ticks": 0, 00:11:24.089 "io_error": {} 00:11:24.089 } 00:11:24.089 ] 00:11:24.089 }' 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=930519 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:11:24.089 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=953903616 00:11:24.090 17:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:11:24.348 [global] 00:11:24.348 thread=1 00:11:24.348 invalidate=1 00:11:24.348 rw=randread 00:11:24.348 time_based=1 00:11:24.348 runtime=5 00:11:24.348 ioengine=libaio 00:11:24.348 direct=1 00:11:24.348 bs=1024 00:11:24.348 iodepth=128 00:11:24.348 norandommap=1 00:11:24.348 numjobs=1 00:11:24.348 00:11:24.348 [job0] 00:11:24.348 filename=/dev/sda 00:11:24.348 queue_depth set to 113 (sda) 00:11:24.348 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:11:24.348 fio-3.35 00:11:24.348 Starting 1 thread 00:11:29.614 00:11:29.614 job0: (groupid=0, jobs=1): err= 0: pid=67150: Thu Jul 25 17:02:21 2024 00:11:29.614 read: IOPS=53.2k, BW=52.0MiB/s (54.5MB/s)(260MiB/5002msec) 00:11:29.614 slat (nsec): min=1941, max=493550, avg=17335.80, stdev=52341.91 00:11:29.614 clat (usec): min=1414, max=4772, avg=2386.93, stdev=122.10 00:11:29.614 lat (usec): min=1420, max=4774, avg=2404.27, stdev=111.24 00:11:29.614 clat percentiles (usec): 00:11:29.614 | 1.00th=[ 2089], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2343], 00:11:29.614 | 30.00th=[ 2376], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2409], 00:11:29.614 | 70.00th=[ 2409], 80.00th=[ 2442], 90.00th=[ 2474], 95.00th=[ 2540], 00:11:29.614 | 99.00th=[ 2802], 99.50th=[ 2933], 99.90th=[ 3523], 99.95th=[ 3654], 00:11:29.614 | 99.99th=[ 3949] 00:11:29.614 bw ( KiB/s): min=52246, max=53856, per=99.98%, avg=53212.89, stdev=541.39, samples=9 00:11:29.614 iops : min=52246, max=53856, avg=53212.89, stdev=541.39, samples=9 00:11:29.614 lat (msec) : 2=0.61%, 4=99.38%, 10=0.01% 00:11:29.614 cpu : usr=7.70%, sys=18.05%, ctx=153362, majf=0, minf=32 00:11:29.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:29.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.614 issued rwts: total=266210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.614 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.614 00:11:29.614 Run status group 0 (all jobs): 00:11:29.614 READ: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=260MiB (273MB), run=5002-5002msec 00:11:29.614 00:11:29.614 Disk stats (read/write): 00:11:29.614 sda: ios=260256/0, merge=0/0, ticks=527919/0, in_queue=527919, util=98.13% 00:11:29.614 17:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:11:29.614 17:02:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.614 17:02:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:29.614 17:02:21 
iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.614 17:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:11:29.614 "tick_rate": 2490000000, 00:11:29.614 "ticks": 1253195404392, 00:11:29.614 "bdevs": [ 00:11:29.614 { 00:11:29.614 "name": "Malloc0", 00:11:29.614 "bytes_read": 1226502656, 00:11:29.614 "num_read_ops": 1196729, 00:11:29.614 "bytes_written": 0, 00:11:29.614 "num_write_ops": 0, 00:11:29.614 "bytes_unmapped": 0, 00:11:29.614 "num_unmap_ops": 0, 00:11:29.614 "bytes_copied": 0, 00:11:29.614 "num_copy_ops": 0, 00:11:29.615 "read_latency_ticks": 2031729266748, 00:11:29.615 "max_read_latency_ticks": 8730452, 00:11:29.615 "min_read_latency_ticks": 9862, 00:11:29.615 "write_latency_ticks": 0, 00:11:29.615 "max_write_latency_ticks": 0, 00:11:29.615 "min_write_latency_ticks": 0, 00:11:29.615 "unmap_latency_ticks": 0, 00:11:29.615 "max_unmap_latency_ticks": 0, 00:11:29.615 "min_unmap_latency_ticks": 0, 00:11:29.615 "copy_latency_ticks": 0, 00:11:29.615 "max_copy_latency_ticks": 0, 00:11:29.615 "min_copy_latency_ticks": 0, 00:11:29.615 "io_error": {} 00:11:29.615 } 00:11:29.615 ] 00:11:29.615 }' 00:11:29.615 17:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:11:29.615 17:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=1196729 00:11:29.615 17:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:11:29.615 17:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=1226502656 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=53242 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=54519808 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@129 -- # '[' 54519808 -gt 27262976 ']' 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@132 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 26 --r_mbytes_per_sec 13 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@133 -- # run_fio Malloc0 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:11:29.615 "tick_rate": 2490000000, 00:11:29.615 "ticks": 1253514113740, 00:11:29.615 "bdevs": [ 00:11:29.615 { 00:11:29.615 
"name": "Malloc0", 00:11:29.615 "bytes_read": 1226502656, 00:11:29.615 "num_read_ops": 1196729, 00:11:29.615 "bytes_written": 0, 00:11:29.615 "num_write_ops": 0, 00:11:29.615 "bytes_unmapped": 0, 00:11:29.615 "num_unmap_ops": 0, 00:11:29.615 "bytes_copied": 0, 00:11:29.615 "num_copy_ops": 0, 00:11:29.615 "read_latency_ticks": 2031729266748, 00:11:29.615 "max_read_latency_ticks": 8730452, 00:11:29.615 "min_read_latency_ticks": 9862, 00:11:29.615 "write_latency_ticks": 0, 00:11:29.615 "max_write_latency_ticks": 0, 00:11:29.615 "min_write_latency_ticks": 0, 00:11:29.615 "unmap_latency_ticks": 0, 00:11:29.615 "max_unmap_latency_ticks": 0, 00:11:29.615 "min_unmap_latency_ticks": 0, 00:11:29.615 "copy_latency_ticks": 0, 00:11:29.615 "max_copy_latency_ticks": 0, 00:11:29.615 "min_copy_latency_ticks": 0, 00:11:29.615 "io_error": {} 00:11:29.615 } 00:11:29.615 ] 00:11:29.615 }' 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:11:29.615 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=1196729 00:11:29.874 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:11:29.874 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=1226502656 00:11:29.874 17:02:22 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:11:29.874 [global] 00:11:29.874 thread=1 00:11:29.874 invalidate=1 00:11:29.874 rw=randread 00:11:29.874 time_based=1 00:11:29.874 runtime=5 00:11:29.874 ioengine=libaio 00:11:29.874 direct=1 00:11:29.874 bs=1024 00:11:29.874 iodepth=128 00:11:29.874 norandommap=1 00:11:29.874 numjobs=1 00:11:29.874 00:11:29.874 [job0] 00:11:29.874 filename=/dev/sda 00:11:29.874 queue_depth set to 113 (sda) 00:11:29.874 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:11:29.874 fio-3.35 00:11:29.874 Starting 1 thread 00:11:35.144 00:11:35.144 job0: (groupid=0, jobs=1): err= 0: pid=67235: Thu Jul 25 17:02:27 2024 00:11:35.144 read: IOPS=13.3k, BW=13.0MiB/s (13.7MB/s)(65.2MiB/5009msec) 00:11:35.144 slat (nsec): min=1956, max=2439.4k, avg=70727.76, stdev=212694.36 00:11:35.144 clat (usec): min=2872, max=17238, avg=9527.72, stdev=515.38 00:11:35.144 lat (usec): min=2877, max=17245, avg=9598.45, stdev=509.40 00:11:35.144 clat percentiles (usec): 00:11:35.144 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 8979], 20.00th=[ 9110], 00:11:35.144 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[ 9896], 00:11:35.144 | 70.00th=[ 9896], 80.00th=[ 9896], 90.00th=[10028], 95.00th=[10159], 00:11:35.144 | 99.00th=[10421], 99.50th=[10552], 99.90th=[12125], 99.95th=[14353], 00:11:35.144 | 99.99th=[16319] 00:11:35.144 bw ( KiB/s): min=13284, max=13366, per=100.00%, avg=13346.78, stdev=25.89, samples=9 00:11:35.144 iops : min=13284, max=13366, avg=13346.78, stdev=25.89, samples=9 00:11:35.144 lat (msec) : 4=0.06%, 10=89.61%, 20=10.33% 00:11:35.144 cpu : usr=5.29%, sys=12.50%, ctx=37140, majf=0, minf=32 00:11:35.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:35.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:35.144 issued rwts: total=66779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:35.144 00:11:35.144 Run status group 0 (all jobs): 
00:11:35.144 READ: bw=13.0MiB/s (13.7MB/s), 13.0MiB/s-13.0MiB/s (13.7MB/s-13.7MB/s), io=65.2MiB (68.4MB), run=5009-5009msec 00:11:35.144 00:11:35.144 Disk stats (read/write): 00:11:35.144 sda: ios=65227/0, merge=0/0, ticks=540903/0, in_queue=540903, util=98.15% 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:11:35.144 "tick_rate": 2490000000, 00:11:35.144 "ticks": 1267082782446, 00:11:35.144 "bdevs": [ 00:11:35.144 { 00:11:35.144 "name": "Malloc0", 00:11:35.144 "bytes_read": 1294884352, 00:11:35.144 "num_read_ops": 1263508, 00:11:35.144 "bytes_written": 0, 00:11:35.144 "num_write_ops": 0, 00:11:35.144 "bytes_unmapped": 0, 00:11:35.144 "num_unmap_ops": 0, 00:11:35.144 "bytes_copied": 0, 00:11:35.144 "num_copy_ops": 0, 00:11:35.144 "read_latency_ticks": 2746071816450, 00:11:35.144 "max_read_latency_ticks": 14159698, 00:11:35.144 "min_read_latency_ticks": 9862, 00:11:35.144 "write_latency_ticks": 0, 00:11:35.144 "max_write_latency_ticks": 0, 00:11:35.144 "min_write_latency_ticks": 0, 00:11:35.144 "unmap_latency_ticks": 0, 00:11:35.144 "max_unmap_latency_ticks": 0, 00:11:35.144 "min_unmap_latency_ticks": 0, 00:11:35.144 "copy_latency_ticks": 0, 00:11:35.144 "max_copy_latency_ticks": 0, 00:11:35.144 "min_copy_latency_ticks": 0, 00:11:35.144 "io_error": {} 00:11:35.144 } 00:11:35.144 ] 00:11:35.144 }' 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=1263508 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=1294884352 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=13355 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=13676339 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@134 -- # verify_qos_limits 13676339 13631488 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=13676339 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=13631488 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:11:35.144 I/O bandwidth limiting tests successful 00:11:35.144 Cleaning up iSCSI connection 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@136 -- # echo 'I/O bandwidth limiting tests successful' 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@138 -- # iscsicleanup 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:11:35.144 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:11:35.403 Logging out of session [sid: 12, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:35.404 Logout of [sid: 12, 
target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@985 -- # rm -rf 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@139 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:Target1 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@141 -- # rm -f ./local-job0-0-verify.state 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@143 -- # killprocess 66615 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@950 -- # '[' -z 66615 ']' 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # kill -0 66615 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@955 -- # uname 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66615 00:11:35.404 killing process with pid 66615 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66615' 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@969 -- # kill 66615 00:11:35.404 17:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@974 -- # wait 66615 00:11:35.663 17:02:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@145 -- # iscsitestfini 00:11:35.663 17:02:28 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:35.663 00:11:35.663 real 0m41.758s 00:11:35.663 user 0m37.170s 00:11:35.663 sys 0m14.066s 00:11:35.663 17:02:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:35.663 ************************************ 00:11:35.663 END TEST iscsi_tgt_qos 00:11:35.663 ************************************ 00:11:35.663 17:02:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:11:35.663 17:02:28 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@39 -- # run_test iscsi_tgt_ip_migration /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:11:35.663 17:02:28 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:35.663 17:02:28 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.663 17:02:28 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:35.663 ************************************ 00:11:35.663 START TEST iscsi_tgt_ip_migration 00:11:35.663 ************************************ 00:11:35.663 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:11:35.922 * Looking for test storage... 
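Each QoS pass in the test that just finished follows the same pattern: snapshot the bdev counters with bdev_get_iostat, drive five seconds of fio through the fio-wrapper, snapshot again, and divide the read-counter deltas by the run time before checking them against the configured limit with bc. A minimal sketch of that arithmetic, using the numbers from the first rate-limited pass (the 10% tolerance band is an assumption; the log only shows that both bc comparisons returned 1):

    run_time=5
    start_io_count=666822        # first bdev_get_iostat snapshot
    end_io_count=797108          # second snapshot, taken after fio
    IOPS_RESULT=$(((end_io_count - start_io_count) / run_time))            # 26057

    start_bytes_read=683877888
    end_bytes_read=817290752
    BANDWIDTH_RESULT=$(((end_bytes_read - start_bytes_read) / run_time))   # 26682572 B/s

    verify_qos_limits() {
        local result=$1 limit=$2
        # bc evaluates the fractional bounds that plain [ ] arithmetic cannot
        [ "$(bc <<< "$result >= $limit * 0.90")" -eq 1 ] &&
            [ "$(bc <<< "$result <= $limit * 1.10")" -eq 1 ]
    }
    verify_qos_limits "$IOPS_RESULT" 26000   # 26057 against a 26000 IOPS cap: pass

The same helper serves every limit type; only the limit changes, e.g. the 26 MiB/s cap becomes 26 * 1024 * 1024 = 27262976 bytes for the bandwidth checks.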
00:11:35.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@11 -- # iscsitestinit 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@13 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@14 -- # pids=() 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@16 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 
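The common.sh sourced above routes target-side networking through a dedicated namespace: TARGET_NS_CMD expands to ip netns exec spdk_iscsi_ns, and that prefix is what later moves the migration address on and off spdk_tgt_int. The pattern, with the names exactly as echoed above (a sketch; whether a given command is namespaced depends on how the arrays are invoked):

    TARGET_NAMESPACE=spdk_iscsi_ns
    TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")
    ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")   # namespace prefix for the target app
    # target-side address changes go through the same prefix, e.g.:
    "${TARGET_NS_CMD[@]}" ip addr add 127.0.0.2/24 dev spdk_tgt_int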
00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:35.922 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:35.922 #define SPDK_CONFIG_H 00:11:35.922 #define SPDK_CONFIG_APPS 1 00:11:35.922 #define SPDK_CONFIG_ARCH native 00:11:35.922 #undef SPDK_CONFIG_ASAN 00:11:35.922 #undef SPDK_CONFIG_AVAHI 00:11:35.922 #undef SPDK_CONFIG_CET 00:11:35.922 #define SPDK_CONFIG_COVERAGE 1 00:11:35.922 #define SPDK_CONFIG_CROSS_PREFIX 00:11:35.922 #undef SPDK_CONFIG_CRYPTO 00:11:35.922 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:35.922 #undef SPDK_CONFIG_CUSTOMOCF 00:11:35.922 #undef SPDK_CONFIG_DAOS 00:11:35.922 #define SPDK_CONFIG_DAOS_DIR 00:11:35.922 #define SPDK_CONFIG_DEBUG 1 00:11:35.922 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:35.922 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:11:35.922 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:35.922 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:35.922 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:35.922 #undef SPDK_CONFIG_DPDK_UADK 00:11:35.922 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:35.922 #define SPDK_CONFIG_EXAMPLES 1 00:11:35.922 #undef SPDK_CONFIG_FC 00:11:35.922 #define SPDK_CONFIG_FC_PATH 00:11:35.922 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:35.922 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:35.922 #undef SPDK_CONFIG_FUSE 00:11:35.922 #undef SPDK_CONFIG_FUZZER 00:11:35.922 #define SPDK_CONFIG_FUZZER_LIB 00:11:35.922 #undef SPDK_CONFIG_GOLANG 00:11:35.922 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:35.922 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:35.922 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:35.922 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:35.922 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:35.922 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:35.922 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:35.922 #define SPDK_CONFIG_IDXD 1 00:11:35.922 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:35.922 #undef SPDK_CONFIG_IPSEC_MB 00:11:35.922 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:35.922 #define SPDK_CONFIG_ISAL 1 00:11:35.922 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:35.922 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:35.922 #define SPDK_CONFIG_LIBDIR 00:11:35.922 #undef SPDK_CONFIG_LTO 00:11:35.922 #define 
SPDK_CONFIG_MAX_LCORES 128 00:11:35.922 #define SPDK_CONFIG_NVME_CUSE 1 00:11:35.923 #undef SPDK_CONFIG_OCF 00:11:35.923 #define SPDK_CONFIG_OCF_PATH 00:11:35.923 #define SPDK_CONFIG_OPENSSL_PATH 00:11:35.923 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:35.923 #define SPDK_CONFIG_PGO_DIR 00:11:35.923 #undef SPDK_CONFIG_PGO_USE 00:11:35.923 #define SPDK_CONFIG_PREFIX /usr/local 00:11:35.923 #undef SPDK_CONFIG_RAID5F 00:11:35.923 #define SPDK_CONFIG_RBD 1 00:11:35.923 #define SPDK_CONFIG_RDMA 1 00:11:35.923 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:35.923 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:35.923 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:35.923 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:35.923 #define SPDK_CONFIG_SHARED 1 00:11:35.923 #undef SPDK_CONFIG_SMA 00:11:35.923 #define SPDK_CONFIG_TESTS 1 00:11:35.923 #undef SPDK_CONFIG_TSAN 00:11:35.923 #define SPDK_CONFIG_UBLK 1 00:11:35.923 #define SPDK_CONFIG_UBSAN 1 00:11:35.923 #undef SPDK_CONFIG_UNIT_TESTS 00:11:35.923 #undef SPDK_CONFIG_URING 00:11:35.923 #define SPDK_CONFIG_URING_PATH 00:11:35.923 #undef SPDK_CONFIG_URING_ZNS 00:11:35.923 #undef SPDK_CONFIG_USDT 00:11:35.923 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:35.923 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:35.923 #undef SPDK_CONFIG_VFIO_USER 00:11:35.923 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:35.923 #define SPDK_CONFIG_VHOST 1 00:11:35.923 #define SPDK_CONFIG_VIRTIO 1 00:11:35.923 #undef SPDK_CONFIG_VTUNE 00:11:35.923 #define SPDK_CONFIG_VTUNE_DIR 00:11:35.923 #define SPDK_CONFIG_WERROR 1 00:11:35.923 #define SPDK_CONFIG_WPDK_DIR 00:11:35.923 #undef SPDK_CONFIG_XNVME 00:11:35.923 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@17 -- # NETMASK=127.0.0.0/24 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@18 -- # MIGRATION_ADDRESS=127.0.0.2 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@56 -- # echo 'Running ip migration tests' 00:11:35.923 Running ip migration tests 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@57 -- # timing_enter start_iscsi_tgt_0 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@58 -- # rpc_first_addr=/var/tmp/spdk0.sock 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@59 -- # iscsi_tgt_start /var/tmp/spdk0.sock 1 00:11:35.923 Process pid: 67372 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=67372 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 67372' 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -m 1 --wait-for-rpc 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:11:35.923 
17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 67372 /var/tmp/spdk0.sock 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@831 -- # '[' -z 67372 ']' 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk0.sock 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:11:35.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.923 17:02:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:35.923 [2024-07-25 17:02:28.340574] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:35.923 [2024-07-25 17:02:28.340795] [ DPDK EAL parameters: iscsi --no-shconf -c 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67372 ] 00:11:36.182 [2024-07-25 17:02:28.482455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.182 [2024-07-25 17:02:28.581005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.773 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.773 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@864 -- # return 0 00:11:36.773 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_set_options -o 30 -a 64 00:11:36.773 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.773 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:36.773 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.773 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk0.sock framework_start_init 00:11:36.773 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.773 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:37.032 iscsi_tgt is listening. Running tests... 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
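Because two targets must coexist, each is started with its own RPC domain socket (-r) and core mask, and every subsequent rpc_cmd selects its instance with -s; --wait-for-rpc holds subsystem initialization until framework_start_init, which is why iscsi_set_options can still be applied first. In outline, as the log shows for the first instance:

    /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -m 1 --wait-for-rpc &
    rpc_cmd -s /var/tmp/spdk0.sock iscsi_set_options -o 30 -a 64   # must precede init
    rpc_cmd -s /var/tmp/spdk0.sock framework_start_init

The second instance repeats this with /var/tmp/spdk1.sock and -m 2.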
00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk0.sock bdev_malloc_create 64 512 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:37.032 Malloc0 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@60 -- # timing_exit start_iscsi_tgt_0 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@62 -- # timing_enter start_iscsi_tgt_1 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@63 -- # rpc_second_addr=/var/tmp/spdk1.sock 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@64 -- # iscsi_tgt_start /var/tmp/spdk1.sock 2 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=67405 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -m 2 --wait-for-rpc 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 67405' 00:11:37.032 Process pid: 67405 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 67405 /var/tmp/spdk1.sock 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@831 -- # '[' -z 67405 ']' 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk1.sock 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 00:11:37.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 
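Each instance then gets an identical minimal stack: an initiator group admitting any initiator name from 127.0.0.0/24, and a 64 MB malloc bdev with 512-byte blocks that comes back named Malloc0, so the target node created during migration can map the same LUN on either side:

    rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24
    rpc_cmd -s /var/tmp/spdk0.sock bdev_malloc_create 64 512   # 64 MB total, 512 B blocks -> Malloc0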
00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:37.032 17:02:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 [2024-07-25 17:02:29.520746] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:37.292 [2024-07-25 17:02:29.520950] [ DPDK EAL parameters: iscsi --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67405 ] 00:11:37.292 [2024-07-25 17:02:29.661218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.292 [2024-07-25 17:02:29.751653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@864 -- # return 0 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_set_options -o 30 -a 64 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk1.sock framework_start_init 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:38.235 iscsi_tgt is listening. Running tests... 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
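The heart of the test is rpc_add_target_node, which runs next against spdk0 and, after the first target is killed, against spdk1: it places the shared address 127.0.0.2 on the target interface, binds a portal group and a target node to it over the given RPC socket, then removes the address again. Reconstructed from the commands in the log (a sketch; error handling omitted):

    rpc_add_target_node() {
        local rpc_sock=$1
        ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int
        rpc_cmd -s "$rpc_sock" iscsi_create_portal_group 1 127.0.0.2:3260
        rpc_cmd -s "$rpc_sock" iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d
        ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int
    }
    rpc_add_target_node /var/tmp/spdk0.sock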
00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk1.sock bdev_malloc_create 64 512 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:38.235 Malloc0 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@65 -- # timing_exit start_iscsi_tgt_1 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@67 -- # rpc_add_target_node /var/tmp/spdk0.sock 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:11:38.235 17:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@69 -- # sleep 1 00:11:39.611 17:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@70 -- # iscsiadm -m discovery -t sendtargets -p 127.0.0.2:3260 00:11:39.611 127.0.0.2:3260,1 iqn.2016-06.io.spdk:target1 00:11:39.611 17:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@71 -- # sleep 1 00:11:40.548 17:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@72 -- # iscsiadm -m node --login -p 127.0.0.2:3260 00:11:40.548 Logging in to [iface: default, target: 
iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:11:40.548 Login to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 00:11:40.548 17:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@73 -- # waitforiscsidevices 1 00:11:40.549 17:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@116 -- # local num=1 00:11:40.549 17:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:40.549 17:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:40.549 17:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:40.549 17:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:40.549 [2024-07-25 17:02:32.760290] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:40.549 17:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # n=1 00:11:40.549 17:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:11:40.549 17:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@123 -- # return 0 00:11:40.549 17:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@77 -- # fiopid=67484 00:11:40.549 17:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@78 -- # sleep 3 00:11:40.549 17:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 32 -t randrw -r 12 00:11:40.549 [global] 00:11:40.549 thread=1 00:11:40.549 invalidate=1 00:11:40.549 rw=randrw 00:11:40.549 time_based=1 00:11:40.549 runtime=12 00:11:40.549 ioengine=libaio 00:11:40.549 direct=1 00:11:40.549 bs=4096 00:11:40.549 iodepth=32 00:11:40.549 norandommap=1 00:11:40.549 numjobs=1 00:11:40.549 00:11:40.549 [job0] 00:11:40.549 filename=/dev/sda 00:11:40.549 queue_depth set to 113 (sda) 00:11:40.549 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32 00:11:40.549 fio-3.35 00:11:40.549 Starting 1 thread 00:11:40.549 [2024-07-25 17:02:32.965373] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:43.832 17:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@80 -- # rpc_cmd -s /var/tmp/spdk0.sock spdk_kill_instance SIGTERM 00:11:43.832 17:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.832 17:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:43.832 17:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.832 17:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@81 -- # wait 67372 00:11:43.832 17:02:36 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@83 -- # rpc_add_target_node /var/tmp/spdk1.sock 00:11:43.832 17:02:36 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:11:43.832 17:02:36 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:11:43.832 17:02:36 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.832 17:02:36 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # 
set +x 00:11:43.832 17:02:36 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.832 17:02:36 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:11:43.832 17:02:36 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.832 17:02:36 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:43.832 17:02:36 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.832 17:02:36 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:11:43.832 17:02:36 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@85 -- # wait 67484 00:11:53.802 [2024-07-25 17:02:45.073689] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:53.802 00:11:53.802 job0: (groupid=0, jobs=1): err= 0: pid=67510: Thu Jul 25 17:02:45 2024 00:11:53.802 read: IOPS=19.1k, BW=74.7MiB/s (78.3MB/s)(896MiB/12001msec) 00:11:53.802 slat (nsec): min=1972, max=1215.4k, avg=5023.65, stdev=5187.56 00:11:53.802 clat (usec): min=127, max=2007.3k, avg=836.87, stdev=16223.56 00:11:53.802 lat (usec): min=136, max=2007.3k, avg=841.89, stdev=16223.54 00:11:53.802 clat percentiles (usec): 00:11:53.802 | 1.00th=[ 469], 5.00th=[ 529], 10.00th=[ 562], 20.00th=[ 611], 00:11:53.802 | 30.00th=[ 644], 40.00th=[ 668], 50.00th=[ 685], 60.00th=[ 717], 00:11:53.802 | 70.00th=[ 766], 80.00th=[ 807], 90.00th=[ 865], 95.00th=[ 906], 00:11:53.802 | 99.00th=[ 988], 99.50th=[ 1037], 99.90th=[ 1876], 99.95th=[ 2474], 00:11:53.802 | 99.99th=[ 4883] 00:11:53.802 bw ( KiB/s): min=37384, max=93960, per=100.00%, avg=87442.00, stdev=14684.28, samples=20 00:11:53.802 iops : min= 9346, max=23490, avg=21860.40, stdev=3671.04, samples=20 00:11:53.802 write: IOPS=19.1k, BW=74.7MiB/s (78.3MB/s)(896MiB/12001msec); 0 zone resets 00:11:53.802 slat (nsec): min=1891, max=693954, avg=6231.49, stdev=6380.73 00:11:53.802 clat (usec): min=126, max=2007.2k, avg=824.62, stdev=17269.39 00:11:53.802 lat (usec): min=140, max=2007.3k, avg=830.85, stdev=17269.37 00:11:53.802 clat percentiles (usec): 00:11:53.802 | 1.00th=[ 445], 5.00th=[ 502], 10.00th=[ 537], 20.00th=[ 586], 00:11:53.802 | 30.00th=[ 619], 40.00th=[ 635], 50.00th=[ 660], 60.00th=[ 685], 00:11:53.802 | 70.00th=[ 717], 80.00th=[ 766], 90.00th=[ 832], 95.00th=[ 881], 00:11:53.802 | 99.00th=[ 947], 99.50th=[ 979], 99.90th=[ 1876], 99.95th=[ 2573], 00:11:53.802 | 99.99th=[ 4883] 00:11:53.802 bw ( KiB/s): min=37144, max=93608, per=100.00%, avg=87424.40, stdev=14868.49, samples=20 00:11:53.803 iops : min= 9286, max=23402, avg=21856.10, stdev=3717.12, samples=20 00:11:53.803 lat (usec) : 250=0.01%, 500=3.84%, 750=68.66%, 1000=26.89% 00:11:53.803 lat (msec) : 2=0.52%, 4=0.07%, 10=0.01%, >=2000=0.01% 00:11:53.803 cpu : usr=5.81%, sys=12.96%, ctx=148934, majf=0, minf=1 00:11:53.803 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% 00:11:53.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:53.803 issued rwts: total=229439,229481,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.803 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:53.803 00:11:53.803 Run status group 0 (all jobs): 00:11:53.803 READ: 
bw=74.7MiB/s (78.3MB/s), 74.7MiB/s-74.7MiB/s (78.3MB/s-78.3MB/s), io=896MiB (940MB), run=12001-12001msec 00:11:53.803 WRITE: bw=74.7MiB/s (78.3MB/s), 74.7MiB/s-74.7MiB/s (78.3MB/s-78.3MB/s), io=896MiB (940MB), run=12001-12001msec 00:11:53.803 00:11:53.803 Disk stats (read/write): 00:11:53.803 sda: ios=227172/227183, merge=0/0, ticks=173746/179491, in_queue=353238, util=99.35% 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@87 -- # trap - SIGINT SIGTERM EXIT 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@89 -- # iscsicleanup 00:11:53.803 Cleaning up iSCSI connection 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:11:53.803 Logging out of session [sid: 13, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:11:53.803 Logout of [sid: 13, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@985 -- # rm -rf 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@91 -- # rpc_cmd -s /var/tmp/spdk1.sock spdk_kill_instance SIGTERM 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@92 -- # wait 67405 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@93 -- # iscsitestfini 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:53.803 00:11:53.803 real 0m17.448s 00:11:53.803 user 0m21.813s 00:11:53.803 sys 0m4.307s 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.803 ************************************ 00:11:53.803 END TEST iscsi_tgt_ip_migration 00:11:53.803 ************************************ 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:53.803 17:02:45 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@40 -- # run_test iscsi_tgt_trace_record /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:11:53.803 17:02:45 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:53.803 17:02:45 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.803 17:02:45 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:53.803 ************************************ 00:11:53.803 START TEST iscsi_tgt_trace_record 00:11:53.803 ************************************ 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:11:53.803 * Looking for test storage... 
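For reference, the iSCSI teardown logged just above (the iscsicleanup helper followed by spdk_kill_instance) reduces to three commands; a minimal sketch, using the RPC socket path and script location exactly as they appear in this log:

    # Log out of every recorded session, then drop the node records
    iscsiadm -m node --logout
    iscsiadm -m node -o delete
    # Ask the second SPDK instance to exit cleanly via its RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock spdk_kill_instance SIGTERM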
00:11:53.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@11 -- # iscsitestinit 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@13 -- # TRACE_TMP_FOLDER=./tmp-trace 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@14 -- # TRACE_RECORD_OUTPUT=./tmp-trace/record.trace 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@15 -- # TRACE_RECORD_NOTICE_LOG=./tmp-trace/record.notice 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@16 -- # TRACE_TOOL_LOG=./tmp-trace/trace.log 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@22 -- # '[' -z 10.0.0.1 ']' 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@27 -- # '[' -z 10.0.0.2 ']' 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@32 -- # NUM_TRACE_ENTRIES=4096 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@33 -- # MALLOC_BDEV_SIZE=64 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@34 -- # MALLOC_BLOCK_SIZE=4096 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@36 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@37 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@39 -- # timing_enter start_iscsi_tgt 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:11:53.803 start iscsi_tgt with trace enabled 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@41 -- # echo 'start iscsi_tgt with trace enabled' 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@43 -- # iscsi_pid=67709 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@44 -- # echo 'Process pid: 67709' 00:11:53.803 Process pid: 67709 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@42 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xf --num-trace-entries 4096 --tpoint-group all 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@46 -- # trap 'killprocess $iscsi_pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@48 -- # waitforlisten 67709 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@831 -- # '[' -z 67709 ']' 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.803 17:02:45 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:11:53.803 [2024-07-25 17:02:45.840350] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:53.803 [2024-07-25 17:02:45.840422] [ DPDK EAL parameters: iscsi --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67709 ] 00:11:53.803 [2024-07-25 17:02:45.983551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.803 [2024-07-25 17:02:46.068653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask all specified. 00:11:53.803 [2024-07-25 17:02:46.068703] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s iscsi -p 67709' to capture a snapshot of events at runtime. 
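The NOTICE lines above spell out the capture pattern this test exercises: the target keeps a fixed-size trace ring in shared memory, and a recorder process drains it to a file while the workload runs. A minimal sketch assembled from the flags shown in this run (binary paths shortened relative to the SPDK repo root; the spdk_trace_record invocation with -s/-p/-f/-q appears verbatim a few lines below):

    # Start the target inside the test namespace with a 4096-entry
    # trace ring and every tracepoint group enabled
    ip netns exec spdk_iscsi_ns ./build/bin/iscsi_tgt -m 0xf \
        --num-trace-entries 4096 --tpoint-group all &
    iscsi_pid=$!
    # Stream trace entries out of shared memory while I/O runs
    ./build/bin/spdk_trace_record -s iscsi -p "$iscsi_pid" \
        -f ./tmp-trace/record.trace -q &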
00:11:53.803 [2024-07-25 17:02:46.068729] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.803 [2024-07-25 17:02:46.068737] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.803 [2024-07-25 17:02:46.068744] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/iscsi_trace.pid67709 for offline analysis/debug. 00:11:53.803 [2024-07-25 17:02:46.068942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.803 [2024-07-25 17:02:46.069817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.803 [2024-07-25 17:02:46.069938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.803 [2024-07-25 17:02:46.069939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.371 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.371 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@864 -- # return 0 00:11:54.371 iscsi_tgt is listening. Running tests... 00:11:54.371 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@50 -- # echo 'iscsi_tgt is listening. Running tests...' 00:11:54.371 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@52 -- # timing_exit start_iscsi_tgt 00:11:54.371 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:54.371 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:11:54.371 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@54 -- # mkdir -p ./tmp-trace 00:11:54.372 Trace record pid: 67744 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@56 -- # record_pid=67744 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@57 -- # echo 'Trace record pid: 67744' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace_record -s iscsi -p 67709 -f ./tmp-trace/record.trace -q 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@59 -- # RPCS= 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@60 -- # RPCS+='iscsi_create_portal_group 1 10.0.0.1:3260\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@61 -- # RPCS+='iscsi_create_initiator_group 2 ANY 10.0.0.2/32\n' 00:11:54.372 Create bdevs and target nodes 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@63 -- # echo 'Create bdevs and target nodes' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@64 -- # CONNECTION_NUMBER=15 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # seq 0 15 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc0\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target0 Target0_alias Malloc0:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 
17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc1\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target1 Target1_alias Malloc1:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc2\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target2 Target2_alias Malloc2:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc3\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target3 Target3_alias Malloc3:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc4\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target4 Target4_alias Malloc4:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc5\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target5 Target5_alias Malloc5:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc6\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target6 Target6_alias Malloc6:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc7\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target7 Target7_alias Malloc7:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc8\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target8 Target8_alias Malloc8:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 
$CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc9\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target9 Target9_alias Malloc9:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc10\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target10 Target10_alias Malloc10:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc11\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target11 Target11_alias Malloc11:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc12\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target12 Target12_alias Malloc12:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc13\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target13 Target13_alias Malloc13:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc14\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target14 Target14_alias Malloc14:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc15\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target15 Target15_alias Malloc15:0 1:2 256 -d\n' 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:54.372 17:02:46 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # echo -e iscsi_create_portal_group 1 '10.0.0.1:3260\niscsi_create_initiator_group' 2 ANY '10.0.0.2/32\nbdev_malloc_create' 64 4096 -b 'Malloc0\niscsi_create_target_node' Target0 Target0_alias Malloc0:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 
'Malloc1\niscsi_create_target_node' Target1 Target1_alias Malloc1:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc2\niscsi_create_target_node' Target2 Target2_alias Malloc2:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc3\niscsi_create_target_node' Target3 Target3_alias Malloc3:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc4\niscsi_create_target_node' Target4 Target4_alias Malloc4:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc5\niscsi_create_target_node' Target5 Target5_alias Malloc5:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc6\niscsi_create_target_node' Target6 Target6_alias Malloc6:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc7\niscsi_create_target_node' Target7 Target7_alias Malloc7:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc8\niscsi_create_target_node' Target8 Target8_alias Malloc8:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc9\niscsi_create_target_node' Target9 Target9_alias Malloc9:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc10\niscsi_create_target_node' Target10 Target10_alias Malloc10:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc11\niscsi_create_target_node' Target11 Target11_alias Malloc11:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc12\niscsi_create_target_node' Target12 Target12_alias Malloc12:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc13\niscsi_create_target_node' Target13 Target13_alias Malloc13:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc14\niscsi_create_target_node' Target14 Target14_alias Malloc14:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc15\niscsi_create_target_node' Target15 Target15_alias Malloc15:0 1:2 256 '-d\n' 00:11:55.309 Malloc0 00:11:55.309 Malloc1 00:11:55.309 Malloc2 00:11:55.309 Malloc3 00:11:55.309 Malloc4 00:11:55.309 Malloc5 00:11:55.309 Malloc6 00:11:55.309 Malloc7 00:11:55.309 Malloc8 00:11:55.309 Malloc9 00:11:55.309 Malloc10 00:11:55.309 Malloc11 00:11:55.309 Malloc12 00:11:55.309 Malloc13 00:11:55.309 Malloc14 00:11:55.309 Malloc15 00:11:55.309 17:02:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@71 -- # sleep 1 00:11:56.304 17:02:48 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@73 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target0 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:11:56.304 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:11:56.304 17:02:48 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@74 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:11:56.304 [2024-07-25 17:02:48.557661] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.304 [2024-07-25 17:02:48.573129] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY 
VPD page 0xb9 00:11:56.304 [2024-07-25 17:02:48.610136] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.304 [2024-07-25 17:02:48.617586] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.304 [2024-07-25 17:02:48.639123] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.304 [2024-07-25 17:02:48.662109] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.304 [2024-07-25 17:02:48.686000] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.304 [2024-07-25 17:02:48.695455] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.304 [2024-07-25 17:02:48.739991] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.304 [2024-07-25 17:02:48.760606] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.563 [2024-07-25 17:02:48.781810] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.563 [2024-07-25 17:02:48.833853] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.563 [2024-07-25 17:02:48.846560] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.563 [2024-07-25 17:02:48.876898] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:11:56.563 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:11:56.563 Login to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:11:56.563 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:11:56.563 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 
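Both iscsiadm commands driving the exchange above appear earlier in this log; for reproduction outside the harness they reduce to the standard sendtargets/login pair (portal address taken from this run), with the remaining login confirmations continuing below:

    # Discover all 16 targets behind the portal, then log in to each
    iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260
    iscsiadm -m node --login -p 10.0.0.1:3260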
00:11:56.563 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:11:56.563 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:11:56.563 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:11:56.563 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:11:56.563 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:11:56.563 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:11:56.563 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:11:56.563 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:11:56.563 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:11:56.563 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:11:56.563 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:11:56.564 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:11:56.564 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:11:56.564 17:02:48 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@75 -- # waitforiscsidevices 16 00:11:56.564 17:02:48 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@116 -- # local num=16 00:11:56.564 17:02:48 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:56.564 17:02:48 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:56.564 17:02:48 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:56.564 17:02:48 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:56.564 [2024-07-25 17:02:48.901406] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.564 [2024-07-25 17:02:48.903925] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:56.564 17:02:48 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # n=16 00:11:56.564 17:02:48 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@120 -- # '[' 16 -ne 16 ']' 00:11:56.564 17:02:48 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@123 -- # return 0 00:11:56.564 17:02:48 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@77 -- # trap 'iscsicleanup; killprocess $iscsi_pid; killprocess $record_pid; delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:56.564 Running FIO 00:11:56.564 17:02:48 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@79 -- # echo 'Running FIO' 00:11:56.564 17:02:48 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 00:11:56.564 [global] 00:11:56.564 thread=1 00:11:56.564 invalidate=1 00:11:56.564 rw=randrw 00:11:56.564 time_based=1 00:11:56.564 runtime=1 00:11:56.564 ioengine=libaio 00:11:56.564 direct=1 00:11:56.564 bs=131072 00:11:56.564 iodepth=32 00:11:56.564 norandommap=1 00:11:56.564 numjobs=1 00:11:56.564 00:11:56.564 [job0] 00:11:56.564 filename=/dev/sda 
00:11:56.564 [job1] 00:11:56.564 filename=/dev/sdb 00:11:56.564 [job2] 00:11:56.564 filename=/dev/sdc 00:11:56.564 [job3] 00:11:56.564 filename=/dev/sdd 00:11:56.564 [job4] 00:11:56.564 filename=/dev/sde 00:11:56.564 [job5] 00:11:56.564 filename=/dev/sdf 00:11:56.564 [job6] 00:11:56.564 filename=/dev/sdg 00:11:56.564 [job7] 00:11:56.564 filename=/dev/sdh 00:11:56.564 [job8] 00:11:56.564 filename=/dev/sdi 00:11:56.564 [job9] 00:11:56.564 filename=/dev/sdj 00:11:56.564 [job10] 00:11:56.564 filename=/dev/sdk 00:11:56.564 [job11] 00:11:56.564 filename=/dev/sdl 00:11:56.564 [job12] 00:11:56.564 filename=/dev/sdm 00:11:56.564 [job13] 00:11:56.564 filename=/dev/sdn 00:11:56.564 [job14] 00:11:56.564 filename=/dev/sdo 00:11:56.564 [job15] 00:11:56.564 filename=/dev/sdp 00:11:57.132 queue_depth set to 113 (sda) 00:11:57.132 queue_depth set to 113 (sdb) 00:11:57.132 queue_depth set to 113 (sdc) 00:11:57.132 queue_depth set to 113 (sdd) 00:11:57.132 queue_depth set to 113 (sde) 00:11:57.132 queue_depth set to 113 (sdf) 00:11:57.132 queue_depth set to 113 (sdg) 00:11:57.132 queue_depth set to 113 (sdh) 00:11:57.132 queue_depth set to 113 (sdi) 00:11:57.132 queue_depth set to 113 (sdj) 00:11:57.132 queue_depth set to 113 (sdk) 00:11:57.132 queue_depth set to 113 (sdl) 00:11:57.132 queue_depth set to 113 (sdm) 00:11:57.392 queue_depth set to 113 (sdn) 00:11:57.392 queue_depth set to 113 (sdo) 00:11:57.392 queue_depth set to 113 (sdp) 00:11:57.392 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:57.392 fio-3.35 00:11:57.392 Starting 16 threads 00:11:57.392 [2024-07-25 17:02:49.804017] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.808110] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.812052] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.816528] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.818993] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.821737] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.824288] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.828105] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.829980] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.831788] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.833783] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.836807] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.838170] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.839582] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.840969] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:57.392 [2024-07-25 17:02:49.843475] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 Trace-record missed 120 trace entries 00:11:58.772 [2024-07-25 17:02:51.136298] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 Trace-record missed 2058 trace entries 00:11:58.772 [2024-07-25 17:02:51.138984] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 [2024-07-25 17:02:51.141166] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 Trace-record missed 2737 trace entries 00:11:58.772 [2024-07-25 17:02:51.144887] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 [2024-07-25 17:02:51.146893] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 [2024-07-25 17:02:51.149268] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 [2024-07-25 17:02:51.151406] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 [2024-07-25 17:02:51.153743] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 [2024-07-25 17:02:51.155747] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 [2024-07-25 17:02:51.158267] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 [2024-07-25 17:02:51.161219] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 [2024-07-25 17:02:51.164058] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 [2024-07-25 
17:02:51.166139] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 [2024-07-25 17:02:51.168713] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.772 [2024-07-25 17:02:51.171224] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.773 [2024-07-25 17:02:51.173283] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:58.773 00:11:58.773 job0: (groupid=0, jobs=1): err= 0: pid=68114: Thu Jul 25 17:02:51 2024 00:11:58.773 read: IOPS=628, BW=78.5MiB/s (82.3MB/s)(80.6MiB/1027msec) 00:11:58.773 slat (usec): min=5, max=435, avg=18.55, stdev=33.52 00:11:58.773 clat (usec): min=2609, max=30960, avg=7973.33, stdev=4234.26 00:11:58.773 lat (usec): min=2630, max=30976, avg=7991.88, stdev=4232.30 00:11:58.773 clat percentiles (usec): 00:11:58.773 | 1.00th=[ 3326], 5.00th=[ 5407], 10.00th=[ 5735], 20.00th=[ 6128], 00:11:58.773 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6718], 60.00th=[ 6980], 00:11:58.773 | 70.00th=[ 7439], 80.00th=[ 8717], 90.00th=[10552], 95.00th=[16319], 00:11:58.773 | 99.00th=[28967], 99.50th=[30016], 99.90th=[31065], 99.95th=[31065], 00:11:58.773 | 99.99th=[31065] 00:11:58.773 bw ( KiB/s): min=78236, max=84996, per=6.67%, avg=81616.00, stdev=4780.04, samples=2 00:11:58.773 iops : min= 611, max= 664, avg=637.50, stdev=37.48, samples=2 00:11:58.773 write: IOPS=666, BW=83.3MiB/s (87.3MB/s)(85.5MiB/1027msec); 0 zone resets 00:11:58.773 slat (usec): min=6, max=1503, avg=28.53, stdev=68.08 00:11:58.773 clat (usec): min=9313, max=67948, avg=40424.83, stdev=7303.07 00:11:58.773 lat (usec): min=9348, max=67972, avg=40453.36, stdev=7304.71 00:11:58.773 clat percentiles (usec): 00:11:58.773 | 1.00th=[20317], 5.00th=[27395], 10.00th=[31327], 20.00th=[35390], 00:11:58.773 | 30.00th=[38011], 40.00th=[39584], 50.00th=[41157], 60.00th=[42730], 00:11:58.773 | 70.00th=[43779], 80.00th=[45351], 90.00th=[48497], 95.00th=[50594], 00:11:58.773 | 99.00th=[60031], 99.50th=[64750], 99.90th=[67634], 99.95th=[67634], 00:11:58.773 | 99.99th=[67634] 00:11:58.773 bw ( KiB/s): min=80032, max=87272, per=6.68%, avg=83652.00, stdev=5119.45, samples=2 00:11:58.773 iops : min= 625, max= 681, avg=653.00, stdev=39.60, samples=2 00:11:58.773 lat (msec) : 4=0.90%, 10=42.14%, 20=4.14%, 50=49.74%, 100=3.09% 00:11:58.773 cpu : usr=0.78%, sys=2.05%, ctx=1108, majf=0, minf=1 00:11:58.773 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=97.7%, >=64=0.0% 00:11:58.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.773 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.773 issued rwts: total=645,684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.773 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.773 job1: (groupid=0, jobs=1): err= 0: pid=68115: Thu Jul 25 17:02:51 2024 00:11:58.773 read: IOPS=572, BW=71.5MiB/s (75.0MB/s)(73.8MiB/1031msec) 00:11:58.773 slat (usec): min=5, max=1226, avg=18.72, stdev=55.60 00:11:58.773 clat (usec): min=428, max=35659, avg=8332.16, stdev=4366.09 00:11:58.773 lat (usec): min=448, max=35683, avg=8350.88, stdev=4363.97 00:11:58.773 clat percentiles (usec): 00:11:58.773 | 1.00th=[ 4015], 5.00th=[ 5538], 10.00th=[ 5932], 20.00th=[ 6259], 00:11:58.773 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 7046], 60.00th=[ 7373], 00:11:58.773 | 70.00th=[ 7963], 80.00th=[ 9110], 90.00th=[11731], 95.00th=[15008], 00:11:58.773 | 99.00th=[31065], 99.50th=[34341], 99.90th=[35914], 
99.95th=[35914], 00:11:58.773 | 99.99th=[35914] 00:11:58.773 bw ( KiB/s): min=73216, max=77312, per=6.15%, avg=75264.00, stdev=2896.31, samples=2 00:11:58.773 iops : min= 572, max= 604, avg=588.00, stdev=22.63, samples=2 00:11:58.773 write: IOPS=623, BW=78.0MiB/s (81.7MB/s)(80.4MiB/1031msec); 0 zone resets 00:11:58.773 slat (usec): min=8, max=3008, avg=33.61, stdev=135.31 00:11:58.773 clat (usec): min=1701, max=75237, avg=43513.06, stdev=11452.97 00:11:58.773 lat (usec): min=1711, max=75249, avg=43546.67, stdev=11453.79 00:11:58.773 clat percentiles (usec): 00:11:58.773 | 1.00th=[ 8094], 5.00th=[22152], 10.00th=[30278], 20.00th=[37487], 00:11:58.773 | 30.00th=[40633], 40.00th=[42730], 50.00th=[43779], 60.00th=[45351], 00:11:58.773 | 70.00th=[47449], 80.00th=[50070], 90.00th=[57410], 95.00th=[63177], 00:11:58.773 | 99.00th=[72877], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:11:58.773 | 99.99th=[74974] 00:11:58.773 bw ( KiB/s): min=76544, max=80640, per=6.28%, avg=78592.00, stdev=2896.31, samples=2 00:11:58.773 iops : min= 598, max= 630, avg=614.00, stdev=22.63, samples=2 00:11:58.773 lat (usec) : 500=0.08% 00:11:58.773 lat (msec) : 2=0.08%, 4=0.32%, 10=41.12%, 20=6.81%, 50=40.71% 00:11:58.773 lat (msec) : 100=10.87% 00:11:58.773 cpu : usr=1.17%, sys=1.46%, ctx=1010, majf=0, minf=1 00:11:58.773 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0% 00:11:58.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.773 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.773 issued rwts: total=590,643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.773 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.773 job2: (groupid=0, jobs=1): err= 0: pid=68117: Thu Jul 25 17:02:51 2024 00:11:58.773 read: IOPS=627, BW=78.5MiB/s (82.3MB/s)(81.1MiB/1034msec) 00:11:58.773 slat (usec): min=6, max=1013, avg=21.76, stdev=55.86 00:11:58.773 clat (usec): min=1554, max=41319, avg=8622.59, stdev=4966.99 00:11:58.773 lat (usec): min=1563, max=41327, avg=8644.35, stdev=4962.92 00:11:58.773 clat percentiles (usec): 00:11:58.773 | 1.00th=[ 2606], 5.00th=[ 4817], 10.00th=[ 5800], 20.00th=[ 6194], 00:11:58.773 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 7046], 60.00th=[ 7373], 00:11:58.773 | 70.00th=[ 8160], 80.00th=[ 9896], 90.00th=[14091], 95.00th=[17957], 00:11:58.773 | 99.00th=[31065], 99.50th=[33817], 99.90th=[41157], 99.95th=[41157], 00:11:58.773 | 99.99th=[41157] 00:11:58.773 bw ( KiB/s): min=70095, max=94720, per=6.73%, avg=82407.50, stdev=17412.50, samples=2 00:11:58.773 iops : min= 547, max= 740, avg=643.50, stdev=136.47, samples=2 00:11:58.773 write: IOPS=639, BW=79.9MiB/s (83.8MB/s)(82.6MiB/1034msec); 0 zone resets 00:11:58.773 slat (usec): min=7, max=832, avg=27.18, stdev=48.13 00:11:58.773 clat (usec): min=5518, max=78291, avg=41470.12, stdev=9514.43 00:11:58.773 lat (usec): min=5572, max=78317, avg=41497.30, stdev=9511.29 00:11:58.773 clat percentiles (usec): 00:11:58.773 | 1.00th=[ 9765], 5.00th=[24511], 10.00th=[28705], 20.00th=[36439], 00:11:58.773 | 30.00th=[40109], 40.00th=[41681], 50.00th=[43254], 60.00th=[44303], 00:11:58.773 | 70.00th=[45876], 80.00th=[47449], 90.00th=[49546], 95.00th=[53216], 00:11:58.773 | 99.00th=[68682], 99.50th=[73925], 99.90th=[78119], 99.95th=[78119], 00:11:58.773 | 99.99th=[78119] 00:11:58.773 bw ( KiB/s): min=77968, max=83456, per=6.45%, avg=80712.00, stdev=3880.60, samples=2 00:11:58.773 iops : min= 609, max= 652, avg=630.50, stdev=30.41, samples=2 00:11:58.773 lat (msec) 
: 2=0.15%, 4=1.91%, 10=38.47%, 20=9.01%, 50=45.80% 00:11:58.773 lat (msec) : 100=4.66% 00:11:58.773 cpu : usr=0.48%, sys=2.52%, ctx=1065, majf=0, minf=1 00:11:58.773 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=97.6%, >=64=0.0% 00:11:58.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.773 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.773 issued rwts: total=649,661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.773 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.773 job3: (groupid=0, jobs=1): err= 0: pid=68118: Thu Jul 25 17:02:51 2024 00:11:58.773 read: IOPS=601, BW=75.2MiB/s (78.8MB/s)(77.1MiB/1026msec) 00:11:58.773 slat (usec): min=6, max=626, avg=18.75, stdev=34.72 00:11:58.773 clat (usec): min=1917, max=34603, avg=8073.99, stdev=4276.82 00:11:58.773 lat (usec): min=1925, max=34653, avg=8092.74, stdev=4275.69 00:11:58.773 clat percentiles (usec): 00:11:58.773 | 1.00th=[ 3818], 5.00th=[ 5604], 10.00th=[ 5932], 20.00th=[ 6194], 00:11:58.773 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6783], 60.00th=[ 6980], 00:11:58.773 | 70.00th=[ 7504], 80.00th=[ 8455], 90.00th=[10945], 95.00th=[14484], 00:11:58.773 | 99.00th=[27919], 99.50th=[28967], 99.90th=[34866], 99.95th=[34866], 00:11:58.773 | 99.99th=[34866] 00:11:58.773 bw ( KiB/s): min=76544, max=79775, per=6.39%, avg=78159.50, stdev=2284.66, samples=2 00:11:58.773 iops : min= 598, max= 623, avg=610.50, stdev=17.68, samples=2 00:11:58.773 write: IOPS=628, BW=78.6MiB/s (82.4MB/s)(80.6MiB/1026msec); 0 zone resets 00:11:58.773 slat (usec): min=8, max=1762, avg=31.16, stdev=89.72 00:11:58.773 clat (usec): min=9756, max=67063, avg=43039.77, stdev=8176.49 00:11:58.773 lat (usec): min=9782, max=67085, avg=43070.94, stdev=8177.52 00:11:58.773 clat percentiles (usec): 00:11:58.773 | 1.00th=[15533], 5.00th=[27919], 10.00th=[32900], 20.00th=[38536], 00:11:58.773 | 30.00th=[41157], 40.00th=[42206], 50.00th=[43779], 60.00th=[44827], 00:11:58.773 | 70.00th=[46400], 80.00th=[47449], 90.00th=[53216], 95.00th=[56886], 00:11:58.773 | 99.00th=[61080], 99.50th=[65274], 99.90th=[66847], 99.95th=[66847], 00:11:58.773 | 99.99th=[66847] 00:11:58.773 bw ( KiB/s): min=78492, max=79872, per=6.32%, avg=79182.00, stdev=975.81, samples=2 00:11:58.773 iops : min= 613, max= 624, avg=618.50, stdev= 7.78, samples=2 00:11:58.773 lat (msec) : 2=0.16%, 4=0.40%, 10=41.68%, 20=5.55%, 50=45.25% 00:11:58.773 lat (msec) : 100=6.97% 00:11:58.773 cpu : usr=0.78%, sys=2.05%, ctx=1080, majf=0, minf=1 00:11:58.773 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0% 00:11:58.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.773 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.773 issued rwts: total=617,645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.773 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.773 job4: (groupid=0, jobs=1): err= 0: pid=68136: Thu Jul 25 17:02:51 2024 00:11:58.773 read: IOPS=608, BW=76.0MiB/s (79.7MB/s)(79.1MiB/1041msec) 00:11:58.773 slat (usec): min=6, max=482, avg=20.59, stdev=35.74 00:11:58.773 clat (usec): min=599, max=42130, avg=8456.60, stdev=4514.65 00:11:58.773 lat (usec): min=619, max=42177, avg=8477.19, stdev=4514.91 00:11:58.773 clat percentiles (usec): 00:11:58.773 | 1.00th=[ 5014], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6128], 00:11:58.773 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6915], 60.00th=[ 7242], 00:11:58.773 | 70.00th=[ 7963], 
80.00th=[ 9634], 90.00th=[12911], 95.00th=[16909], 00:11:58.773 | 99.00th=[27919], 99.50th=[31327], 99.90th=[42206], 99.95th=[42206], 00:11:58.773 | 99.99th=[42206] 00:11:58.773 bw ( KiB/s): min=80640, max=80896, per=6.60%, avg=80768.00, stdev=181.02, samples=2 00:11:58.773 iops : min= 630, max= 632, avg=631.00, stdev= 1.41, samples=2 00:11:58.773 write: IOPS=617, BW=77.2MiB/s (81.0MB/s)(80.4MiB/1041msec); 0 zone resets 00:11:58.773 slat (usec): min=9, max=9768, avg=43.28, stdev=385.86 00:11:58.773 clat (usec): min=978, max=82029, avg=42842.43, stdev=10642.92 00:11:58.774 lat (usec): min=2831, max=82098, avg=42885.71, stdev=10591.02 00:11:58.774 clat percentiles (usec): 00:11:58.774 | 1.00th=[ 4555], 5.00th=[28967], 10.00th=[33424], 20.00th=[36963], 00:11:58.774 | 30.00th=[39584], 40.00th=[41157], 50.00th=[42206], 60.00th=[43779], 00:11:58.774 | 70.00th=[46400], 80.00th=[48497], 90.00th=[52691], 95.00th=[63177], 00:11:58.774 | 99.00th=[73925], 99.50th=[79168], 99.90th=[82314], 99.95th=[82314], 00:11:58.774 | 99.99th=[82314] 00:11:58.774 bw ( KiB/s): min=74496, max=82688, per=6.28%, avg=78592.00, stdev=5792.62, samples=2 00:11:58.774 iops : min= 582, max= 646, avg=614.00, stdev=45.25, samples=2 00:11:58.774 lat (usec) : 750=0.16%, 1000=0.08% 00:11:58.774 lat (msec) : 4=0.63%, 10=40.67%, 20=7.99%, 50=43.18%, 100=7.29% 00:11:58.774 cpu : usr=1.15%, sys=1.83%, ctx=1075, majf=0, minf=1 00:11:58.774 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.6%, >=64=0.0% 00:11:58.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.774 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.774 issued rwts: total=633,643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.774 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.774 job5: (groupid=0, jobs=1): err= 0: pid=68140: Thu Jul 25 17:02:51 2024 00:11:58.774 read: IOPS=612, BW=76.6MiB/s (80.3MB/s)(78.9MiB/1030msec) 00:11:58.774 slat (usec): min=6, max=8634, avg=37.45, stdev=356.24 00:11:58.774 clat (usec): min=165, max=33032, avg=8393.35, stdev=4660.41 00:11:58.774 lat (usec): min=3113, max=33044, avg=8430.81, stdev=4643.51 00:11:58.774 clat percentiles (usec): 00:11:58.774 | 1.00th=[ 4228], 5.00th=[ 5735], 10.00th=[ 5997], 20.00th=[ 6325], 00:11:58.774 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 6980], 60.00th=[ 7308], 00:11:58.774 | 70.00th=[ 7767], 80.00th=[ 8979], 90.00th=[10945], 95.00th=[17695], 00:11:58.774 | 99.00th=[30278], 99.50th=[32637], 99.90th=[33162], 99.95th=[33162], 00:11:58.774 | 99.99th=[33162] 00:11:58.774 bw ( KiB/s): min=75008, max=85248, per=6.55%, avg=80128.00, stdev=7240.77, samples=2 00:11:58.774 iops : min= 586, max= 666, avg=626.00, stdev=56.57, samples=2 00:11:58.774 write: IOPS=613, BW=76.7MiB/s (80.4MB/s)(79.0MiB/1030msec); 0 zone resets 00:11:58.774 slat (usec): min=9, max=871, avg=25.95, stdev=48.10 00:11:58.774 clat (usec): min=6224, max=79717, avg=43608.46, stdev=9474.27 00:11:58.774 lat (usec): min=6260, max=79733, avg=43634.41, stdev=9477.77 00:11:58.774 clat percentiles (usec): 00:11:58.774 | 1.00th=[12780], 5.00th=[30802], 10.00th=[35390], 20.00th=[38011], 00:11:58.774 | 30.00th=[39584], 40.00th=[41157], 50.00th=[42730], 60.00th=[44303], 00:11:58.774 | 70.00th=[46400], 80.00th=[49021], 90.00th=[54264], 95.00th=[62653], 00:11:58.774 | 99.00th=[69731], 99.50th=[76022], 99.90th=[80217], 99.95th=[80217], 00:11:58.774 | 99.99th=[80217] 00:11:58.774 bw ( KiB/s): min=74240, max=80896, per=6.19%, avg=77568.00, stdev=4706.50, samples=2 
00:11:58.774 iops : min= 580, max= 632, avg=606.00, stdev=36.77, samples=2 00:11:58.774 lat (usec) : 250=0.08% 00:11:58.774 lat (msec) : 4=0.32%, 10=42.83%, 20=5.46%, 50=42.60%, 100=8.71% 00:11:58.774 cpu : usr=1.55%, sys=1.36%, ctx=1052, majf=0, minf=1 00:11:58.774 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0% 00:11:58.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.774 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.774 issued rwts: total=631,632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.774 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.774 job6: (groupid=0, jobs=1): err= 0: pid=68149: Thu Jul 25 17:02:51 2024 00:11:58.774 read: IOPS=600, BW=75.1MiB/s (78.7MB/s)(77.6MiB/1034msec) 00:11:58.774 slat (usec): min=6, max=493, avg=20.06, stdev=36.32 00:11:58.774 clat (usec): min=495, max=38308, avg=8170.82, stdev=4895.53 00:11:58.774 lat (usec): min=507, max=38318, avg=8190.88, stdev=4894.35 00:11:58.774 clat percentiles (usec): 00:11:58.774 | 1.00th=[ 1795], 5.00th=[ 5538], 10.00th=[ 5932], 20.00th=[ 6259], 00:11:58.774 | 30.00th=[ 6456], 40.00th=[ 6652], 50.00th=[ 6915], 60.00th=[ 7177], 00:11:58.774 | 70.00th=[ 7570], 80.00th=[ 8291], 90.00th=[10028], 95.00th=[17171], 00:11:58.774 | 99.00th=[31327], 99.50th=[32113], 99.90th=[38536], 99.95th=[38536], 00:11:58.774 | 99.99th=[38536] 00:11:58.774 bw ( KiB/s): min=76288, max=81920, per=6.46%, avg=79104.00, stdev=3982.43, samples=2 00:11:58.774 iops : min= 596, max= 640, avg=618.00, stdev=31.11, samples=2 00:11:58.774 write: IOPS=606, BW=75.8MiB/s (79.5MB/s)(78.4MiB/1034msec); 0 zone resets 00:11:58.774 slat (usec): min=8, max=9245, avg=47.60, stdev=381.62 00:11:58.774 clat (usec): min=1619, max=77480, avg=44505.91, stdev=10446.80 00:11:58.774 lat (usec): min=1651, max=77506, avg=44553.51, stdev=10439.66 00:11:58.774 clat percentiles (usec): 00:11:58.774 | 1.00th=[13042], 5.00th=[30278], 10.00th=[33817], 20.00th=[38536], 00:11:58.774 | 30.00th=[40633], 40.00th=[41681], 50.00th=[43254], 60.00th=[44827], 00:11:58.774 | 70.00th=[46400], 80.00th=[50594], 90.00th=[59507], 95.00th=[64750], 00:11:58.774 | 99.00th=[68682], 99.50th=[69731], 99.90th=[77071], 99.95th=[77071], 00:11:58.774 | 99.99th=[77071] 00:11:58.774 bw ( KiB/s): min=74752, max=78592, per=6.12%, avg=76672.00, stdev=2715.29, samples=2 00:11:58.774 iops : min= 584, max= 614, avg=599.00, stdev=21.21, samples=2 00:11:58.774 lat (usec) : 500=0.08%, 750=0.16%, 1000=0.08% 00:11:58.774 lat (msec) : 2=0.40%, 4=0.32%, 10=44.15%, 20=3.29%, 50=40.79% 00:11:58.774 lat (msec) : 100=10.74% 00:11:58.774 cpu : usr=0.29%, sys=2.42%, ctx=1054, majf=0, minf=1 00:11:58.774 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0% 00:11:58.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.774 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.774 issued rwts: total=621,627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.774 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.774 job7: (groupid=0, jobs=1): err= 0: pid=68206: Thu Jul 25 17:02:51 2024 00:11:58.774 read: IOPS=529, BW=66.2MiB/s (69.4MB/s)(68.1MiB/1029msec) 00:11:58.774 slat (usec): min=6, max=883, avg=19.12, stdev=41.30 00:11:58.774 clat (usec): min=2498, max=37186, avg=8300.43, stdev=4762.69 00:11:58.774 lat (usec): min=2508, max=37197, avg=8319.55, stdev=4760.21 00:11:58.774 clat percentiles (usec): 00:11:58.774 | 1.00th=[ 4817], 
5.00th=[ 5735], 10.00th=[ 5997], 20.00th=[ 6259], 00:11:58.774 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6783], 60.00th=[ 6980], 00:11:58.774 | 70.00th=[ 7439], 80.00th=[ 8455], 90.00th=[10552], 95.00th=[19530], 00:11:58.774 | 99.00th=[33817], 99.50th=[34866], 99.90th=[36963], 99.95th=[36963], 00:11:58.774 | 99.99th=[36963] 00:11:58.774 bw ( KiB/s): min=62730, max=75927, per=5.66%, avg=69328.50, stdev=9331.69, samples=2 00:11:58.774 iops : min= 490, max= 593, avg=541.50, stdev=72.83, samples=2 00:11:58.774 write: IOPS=601, BW=75.2MiB/s (78.8MB/s)(77.4MiB/1029msec); 0 zone resets 00:11:58.774 slat (usec): min=8, max=1301, avg=30.21, stdev=66.67 00:11:58.774 clat (usec): min=12619, max=79845, avg=45739.24, stdev=8374.81 00:11:58.774 lat (usec): min=12648, max=79861, avg=45769.44, stdev=8372.37 00:11:58.774 clat percentiles (usec): 00:11:58.774 | 1.00th=[24773], 5.00th=[34341], 10.00th=[36963], 20.00th=[39584], 00:11:58.774 | 30.00th=[41681], 40.00th=[43254], 50.00th=[44827], 60.00th=[46400], 00:11:58.774 | 70.00th=[48497], 80.00th=[51643], 90.00th=[56886], 95.00th=[60556], 00:11:58.774 | 99.00th=[69731], 99.50th=[71828], 99.90th=[80217], 99.95th=[80217], 00:11:58.774 | 99.99th=[80217] 00:11:58.774 bw ( KiB/s): min=75174, max=75414, per=6.01%, avg=75294.00, stdev=169.71, samples=2 00:11:58.774 iops : min= 587, max= 589, avg=588.00, stdev= 1.41, samples=2 00:11:58.774 lat (msec) : 4=0.17%, 10=41.58%, 20=3.09%, 50=41.67%, 100=13.49% 00:11:58.774 cpu : usr=0.97%, sys=1.75%, ctx=952, majf=0, minf=1 00:11:58.774 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=97.3%, >=64=0.0% 00:11:58.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.774 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.774 issued rwts: total=545,619,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.774 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.774 job8: (groupid=0, jobs=1): err= 0: pid=68224: Thu Jul 25 17:02:51 2024 00:11:58.774 read: IOPS=536, BW=67.1MiB/s (70.4MB/s)(69.4MiB/1034msec) 00:11:58.774 slat (usec): min=6, max=1041, avg=21.15, stdev=57.75 00:11:58.774 clat (usec): min=3477, max=39057, avg=8653.88, stdev=5027.24 00:11:58.774 lat (usec): min=3516, max=39079, avg=8675.03, stdev=5027.59 00:11:58.774 clat percentiles (usec): 00:11:58.774 | 1.00th=[ 4883], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6128], 00:11:58.774 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6718], 60.00th=[ 7177], 00:11:58.774 | 70.00th=[ 8160], 80.00th=[ 9896], 90.00th=[12911], 95.00th=[17957], 00:11:58.774 | 99.00th=[31065], 99.50th=[35390], 99.90th=[39060], 99.95th=[39060], 00:11:58.774 | 99.99th=[39060] 00:11:58.774 bw ( KiB/s): min=69888, max=71424, per=5.77%, avg=70656.00, stdev=1086.12, samples=2 00:11:58.774 iops : min= 546, max= 558, avg=552.00, stdev= 8.49, samples=2 00:11:58.774 write: IOPS=582, BW=72.8MiB/s (76.3MB/s)(75.2MiB/1034msec); 0 zone resets 00:11:58.774 slat (usec): min=8, max=22008, avg=69.97, stdev=901.97 00:11:58.774 clat (usec): min=1441, max=73396, avg=46674.84, stdev=11237.85 00:11:58.774 lat (usec): min=3495, max=73419, avg=46744.80, stdev=11206.05 00:11:58.774 clat percentiles (usec): 00:11:58.774 | 1.00th=[12911], 5.00th=[32113], 10.00th=[36963], 20.00th=[40109], 00:11:58.774 | 30.00th=[42206], 40.00th=[43779], 50.00th=[45876], 60.00th=[47973], 00:11:58.774 | 70.00th=[50594], 80.00th=[54789], 90.00th=[61604], 95.00th=[66323], 00:11:58.774 | 99.00th=[71828], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:11:58.774 | 
99.99th=[72877] 00:11:58.774 bw ( KiB/s): min=73216, max=73472, per=5.86%, avg=73344.00, stdev=181.02, samples=2 00:11:58.774 iops : min= 572, max= 574, avg=573.00, stdev= 1.41, samples=2 00:11:58.774 lat (msec) : 2=0.09%, 4=0.52%, 10=38.38%, 20=8.64%, 50=36.04% 00:11:58.774 lat (msec) : 100=16.34% 00:11:58.774 cpu : usr=0.97%, sys=1.65%, ctx=966, majf=0, minf=1 00:11:58.774 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=97.3%, >=64=0.0% 00:11:58.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.774 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.774 issued rwts: total=555,602,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.774 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.774 job9: (groupid=0, jobs=1): err= 0: pid=68226: Thu Jul 25 17:02:51 2024 00:11:58.774 read: IOPS=704, BW=88.0MiB/s (92.3MB/s)(89.9MiB/1021msec) 00:11:58.774 slat (usec): min=6, max=394, avg=16.64, stdev=28.56 00:11:58.775 clat (usec): min=1719, max=32537, avg=8154.05, stdev=4149.99 00:11:58.775 lat (usec): min=1727, max=32546, avg=8170.69, stdev=4148.58 00:11:58.775 clat percentiles (usec): 00:11:58.775 | 1.00th=[ 4113], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6259], 00:11:58.775 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 7242], 00:11:58.775 | 70.00th=[ 7701], 80.00th=[ 8717], 90.00th=[10159], 95.00th=[18220], 00:11:58.775 | 99.00th=[28181], 99.50th=[32113], 99.90th=[32637], 99.95th=[32637], 00:11:58.775 | 99.99th=[32637] 00:11:58.775 bw ( KiB/s): min=87284, max=94720, per=7.43%, avg=91002.00, stdev=5258.05, samples=2 00:11:58.775 iops : min= 681, max= 740, avg=710.50, stdev=41.72, samples=2 00:11:58.775 write: IOPS=657, BW=82.1MiB/s (86.1MB/s)(83.9MiB/1021msec); 0 zone resets 00:11:58.775 slat (usec): min=7, max=874, avg=23.64, stdev=45.84 00:11:58.775 clat (usec): min=8656, max=65285, avg=39845.32, stdev=9213.54 00:11:58.775 lat (usec): min=8675, max=65299, avg=39868.96, stdev=9212.99 00:11:58.775 clat percentiles (usec): 00:11:58.775 | 1.00th=[13435], 5.00th=[23462], 10.00th=[28181], 20.00th=[32900], 00:11:58.775 | 30.00th=[36963], 40.00th=[39060], 50.00th=[40633], 60.00th=[42206], 00:11:58.775 | 70.00th=[43779], 80.00th=[45876], 90.00th=[50594], 95.00th=[55837], 00:11:58.775 | 99.00th=[61604], 99.50th=[63701], 99.90th=[65274], 99.95th=[65274], 00:11:58.775 | 99.99th=[65274] 00:11:58.775 bw ( KiB/s): min=81664, max=82958, per=6.57%, avg=82311.00, stdev=915.00, samples=2 00:11:58.775 iops : min= 638, max= 648, avg=643.00, stdev= 7.07, samples=2 00:11:58.775 lat (msec) : 2=0.14%, 4=0.29%, 10=45.61%, 20=4.96%, 50=43.88% 00:11:58.775 lat (msec) : 100=5.11% 00:11:58.775 cpu : usr=0.69%, sys=2.16%, ctx=1133, majf=0, minf=1 00:11:58.775 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=97.8%, >=64=0.0% 00:11:58.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.775 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.775 issued rwts: total=719,671,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.775 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.775 job10: (groupid=0, jobs=1): err= 0: pid=68227: Thu Jul 25 17:02:51 2024 00:11:58.775 read: IOPS=626, BW=78.3MiB/s (82.1MB/s)(81.1MiB/1036msec) 00:11:58.775 slat (usec): min=6, max=3207, avg=30.46, stdev=183.86 00:11:58.775 clat (usec): min=1276, max=38220, avg=8784.97, stdev=5643.22 00:11:58.775 lat (usec): min=1285, max=38230, avg=8815.43, stdev=5635.46 00:11:58.775 clat 
percentiles (usec): 00:11:58.775 | 1.00th=[ 2999], 5.00th=[ 4555], 10.00th=[ 5800], 20.00th=[ 6194], 00:11:58.775 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 7046], 60.00th=[ 7373], 00:11:58.775 | 70.00th=[ 8291], 80.00th=[ 9896], 90.00th=[12649], 95.00th=[21890], 00:11:58.775 | 99.00th=[35914], 99.50th=[36439], 99.90th=[38011], 99.95th=[38011], 00:11:58.775 | 99.99th=[38011] 00:11:58.775 bw ( KiB/s): min=75383, max=88832, per=6.71%, avg=82107.50, stdev=9509.88, samples=2 00:11:58.775 iops : min= 588, max= 694, avg=641.00, stdev=74.95, samples=2 00:11:58.775 write: IOPS=612, BW=76.6MiB/s (80.3MB/s)(79.4MiB/1036msec); 0 zone resets 00:11:58.775 slat (usec): min=9, max=1463, avg=30.65, stdev=78.98 00:11:58.775 clat (usec): min=5644, max=75673, avg=43077.66, stdev=9738.49 00:11:58.775 lat (usec): min=5679, max=75699, avg=43108.31, stdev=9743.37 00:11:58.775 clat percentiles (usec): 00:11:58.775 | 1.00th=[12387], 5.00th=[26084], 10.00th=[32375], 20.00th=[38011], 00:11:58.775 | 30.00th=[40633], 40.00th=[42206], 50.00th=[43254], 60.00th=[44303], 00:11:58.775 | 70.00th=[45876], 80.00th=[47973], 90.00th=[52691], 95.00th=[62653], 00:11:58.775 | 99.00th=[68682], 99.50th=[69731], 99.90th=[76022], 99.95th=[76022], 00:11:58.775 | 99.99th=[76022] 00:11:58.775 bw ( KiB/s): min=77056, max=77660, per=6.18%, avg=77358.00, stdev=427.09, samples=2 00:11:58.775 iops : min= 602, max= 606, avg=604.00, stdev= 2.83, samples=2 00:11:58.775 lat (msec) : 2=0.31%, 4=1.56%, 10=39.88%, 20=6.85%, 50=44.47% 00:11:58.775 lat (msec) : 100=6.93% 00:11:58.775 cpu : usr=0.48%, sys=2.51%, ctx=1034, majf=0, minf=1 00:11:58.775 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=97.6%, >=64=0.0% 00:11:58.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.775 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.775 issued rwts: total=649,635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.775 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.775 job11: (groupid=0, jobs=1): err= 0: pid=68229: Thu Jul 25 17:02:51 2024 00:11:58.775 read: IOPS=605, BW=75.7MiB/s (79.4MB/s)(77.8MiB/1027msec) 00:11:58.775 slat (usec): min=6, max=648, avg=19.91, stdev=40.55 00:11:58.775 clat (usec): min=2852, max=31826, avg=7864.73, stdev=4044.33 00:11:58.775 lat (usec): min=2861, max=31846, avg=7884.64, stdev=4043.13 00:11:58.775 clat percentiles (usec): 00:11:58.775 | 1.00th=[ 4080], 5.00th=[ 5735], 10.00th=[ 5997], 20.00th=[ 6194], 00:11:58.775 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6783], 60.00th=[ 6980], 00:11:58.775 | 70.00th=[ 7308], 80.00th=[ 8094], 90.00th=[ 9896], 95.00th=[13566], 00:11:58.775 | 99.00th=[28967], 99.50th=[29492], 99.90th=[31851], 99.95th=[31851], 00:11:58.775 | 99.99th=[31851] 00:11:58.775 bw ( KiB/s): min=74752, max=83200, per=6.45%, avg=78976.00, stdev=5973.64, samples=2 00:11:58.775 iops : min= 584, max= 650, avg=617.00, stdev=46.67, samples=2 00:11:58.775 write: IOPS=594, BW=74.4MiB/s (78.0MB/s)(76.4MiB/1027msec); 0 zone resets 00:11:58.775 slat (usec): min=10, max=951, avg=30.65, stdev=66.65 00:11:58.775 clat (usec): min=10354, max=69417, avg=45653.35, stdev=9249.34 00:11:58.775 lat (usec): min=10384, max=69446, avg=45684.00, stdev=9249.72 00:11:58.775 clat percentiles (usec): 00:11:58.775 | 1.00th=[23200], 5.00th=[31327], 10.00th=[35914], 20.00th=[39060], 00:11:58.775 | 30.00th=[41157], 40.00th=[42730], 50.00th=[44303], 60.00th=[45876], 00:11:58.775 | 70.00th=[49021], 80.00th=[53216], 90.00th=[60031], 95.00th=[62653], 00:11:58.775 | 
99.00th=[65274], 99.50th=[66847], 99.90th=[69731], 99.95th=[69731], 00:11:58.775 | 99.99th=[69731] 00:11:58.775 bw ( KiB/s): min=71424, max=78080, per=5.97%, avg=74752.00, stdev=4706.50, samples=2 00:11:58.775 iops : min= 558, max= 610, avg=584.00, stdev=36.77, samples=2 00:11:58.775 lat (msec) : 4=0.32%, 10=45.34%, 20=3.24%, 50=37.63%, 100=13.46% 00:11:58.775 cpu : usr=1.07%, sys=1.75%, ctx=1042, majf=0, minf=1 00:11:58.775 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0% 00:11:58.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.775 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.775 issued rwts: total=622,611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.775 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.775 job12: (groupid=0, jobs=1): err= 0: pid=68230: Thu Jul 25 17:02:51 2024 00:11:58.775 read: IOPS=650, BW=81.3MiB/s (85.2MB/s)(83.2MiB/1024msec) 00:11:58.775 slat (usec): min=6, max=8291, avg=38.15, stdev=359.31 00:11:58.775 clat (usec): min=184, max=32221, avg=8263.63, stdev=4193.94 00:11:58.775 lat (usec): min=3993, max=32244, avg=8301.78, stdev=4183.30 00:11:58.775 clat percentiles (usec): 00:11:58.775 | 1.00th=[ 4424], 5.00th=[ 5538], 10.00th=[ 5800], 20.00th=[ 6063], 00:11:58.775 | 30.00th=[ 6325], 40.00th=[ 6587], 50.00th=[ 6783], 60.00th=[ 7177], 00:11:58.775 | 70.00th=[ 7767], 80.00th=[ 9503], 90.00th=[11863], 95.00th=[16450], 00:11:58.775 | 99.00th=[27919], 99.50th=[28181], 99.90th=[32113], 99.95th=[32113], 00:11:58.775 | 99.99th=[32113] 00:11:58.775 bw ( KiB/s): min=74240, max=94909, per=6.91%, avg=84574.50, stdev=14615.19, samples=2 00:11:58.775 iops : min= 580, max= 741, avg=660.50, stdev=113.84, samples=2 00:11:58.775 write: IOPS=614, BW=76.8MiB/s (80.5MB/s)(78.6MiB/1024msec); 0 zone resets 00:11:58.775 slat (usec): min=10, max=5174, avg=34.48, stdev=209.21 00:11:58.775 clat (usec): min=4743, max=67291, avg=43012.24, stdev=8311.83 00:11:58.775 lat (usec): min=4774, max=67337, avg=43046.72, stdev=8314.01 00:11:58.775 clat percentiles (usec): 00:11:58.775 | 1.00th=[18220], 5.00th=[28443], 10.00th=[33162], 20.00th=[38011], 00:11:58.775 | 30.00th=[40109], 40.00th=[41681], 50.00th=[42730], 60.00th=[44303], 00:11:58.775 | 70.00th=[45876], 80.00th=[48497], 90.00th=[53216], 95.00th=[55837], 00:11:58.775 | 99.00th=[63701], 99.50th=[64226], 99.90th=[67634], 99.95th=[67634], 00:11:58.775 | 99.99th=[67634] 00:11:58.775 bw ( KiB/s): min=75927, max=78848, per=6.18%, avg=77387.50, stdev=2065.46, samples=2 00:11:58.775 iops : min= 593, max= 616, avg=604.50, stdev=16.26, samples=2 00:11:58.775 lat (usec) : 250=0.08% 00:11:58.775 lat (msec) : 4=0.08%, 10=42.16%, 20=7.72%, 50=41.62%, 100=8.34% 00:11:58.775 cpu : usr=1.08%, sys=1.86%, ctx=1030, majf=0, minf=1 00:11:58.775 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=97.6%, >=64=0.0% 00:11:58.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.775 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.775 issued rwts: total=666,629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.775 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.775 job13: (groupid=0, jobs=1): err= 0: pid=68231: Thu Jul 25 17:02:51 2024 00:11:58.775 read: IOPS=579, BW=72.4MiB/s (75.9MB/s)(74.1MiB/1024msec) 00:11:58.775 slat (usec): min=5, max=1454, avg=23.98, stdev=80.95 00:11:58.775 clat (usec): min=1739, max=32574, avg=8428.81, stdev=4608.63 00:11:58.775 lat (usec): min=1831, 
max=32584, avg=8452.79, stdev=4601.97 00:11:58.775 clat percentiles (usec): 00:11:58.775 | 1.00th=[ 3589], 5.00th=[ 5669], 10.00th=[ 5932], 20.00th=[ 6194], 00:11:58.775 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6915], 60.00th=[ 7177], 00:11:58.775 | 70.00th=[ 7767], 80.00th=[ 8848], 90.00th=[11994], 95.00th=[18220], 00:11:58.775 | 99.00th=[29230], 99.50th=[32637], 99.90th=[32637], 99.95th=[32637], 00:11:58.775 | 99.99th=[32637] 00:11:58.775 bw ( KiB/s): min=68096, max=82432, per=6.15%, avg=75264.00, stdev=10137.08, samples=2 00:11:58.775 iops : min= 532, max= 644, avg=588.00, stdev=79.20, samples=2 00:11:58.775 write: IOPS=586, BW=73.4MiB/s (76.9MB/s)(75.1MiB/1024msec); 0 zone resets 00:11:58.775 slat (usec): min=9, max=1692, avg=30.14, stdev=79.84 00:11:58.775 clat (usec): min=3626, max=67714, avg=45926.64, stdev=10005.28 00:11:58.775 lat (usec): min=3913, max=67742, avg=45956.78, stdev=9992.56 00:11:58.775 clat percentiles (usec): 00:11:58.775 | 1.00th=[17433], 5.00th=[29754], 10.00th=[35390], 20.00th=[39584], 00:11:58.775 | 30.00th=[41681], 40.00th=[44303], 50.00th=[45351], 60.00th=[47449], 00:11:58.775 | 70.00th=[49546], 80.00th=[52691], 90.00th=[61080], 95.00th=[63177], 00:11:58.775 | 99.00th=[65799], 99.50th=[66323], 99.90th=[67634], 99.95th=[67634], 00:11:58.775 | 99.99th=[67634] 00:11:58.775 bw ( KiB/s): min=71168, max=75776, per=5.87%, avg=73472.00, stdev=3258.35, samples=2 00:11:58.776 iops : min= 556, max= 592, avg=574.00, stdev=25.46, samples=2 00:11:58.776 lat (msec) : 2=0.17%, 4=0.67%, 10=41.96%, 20=5.61%, 50=37.52% 00:11:58.776 lat (msec) : 100=14.07% 00:11:58.776 cpu : usr=0.68%, sys=2.05%, ctx=1010, majf=0, minf=1 00:11:58.776 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.4%, >=64=0.0% 00:11:58.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.776 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.776 issued rwts: total=593,601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.776 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.776 job14: (groupid=0, jobs=1): err= 0: pid=68232: Thu Jul 25 17:02:51 2024 00:11:58.776 read: IOPS=578, BW=72.3MiB/s (75.8MB/s)(75.0MiB/1037msec) 00:11:58.776 slat (usec): min=6, max=195, avg=18.61, stdev=15.76 00:11:58.776 clat (usec): min=2007, max=41194, avg=8165.04, stdev=4430.19 00:11:58.776 lat (usec): min=2022, max=41226, avg=8183.65, stdev=4429.44 00:11:58.776 clat percentiles (usec): 00:11:58.776 | 1.00th=[ 2933], 5.00th=[ 5473], 10.00th=[ 5866], 20.00th=[ 6194], 00:11:58.776 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 7177], 00:11:58.776 | 70.00th=[ 7635], 80.00th=[ 8586], 90.00th=[11469], 95.00th=[15139], 00:11:58.776 | 99.00th=[27657], 99.50th=[29492], 99.90th=[41157], 99.95th=[41157], 00:11:58.776 | 99.99th=[41157] 00:11:58.776 bw ( KiB/s): min=75636, max=76288, per=6.21%, avg=75962.00, stdev=461.03, samples=2 00:11:58.776 iops : min= 590, max= 596, avg=593.00, stdev= 4.24, samples=2 00:11:58.776 write: IOPS=609, BW=76.2MiB/s (79.9MB/s)(79.0MiB/1037msec); 0 zone resets 00:11:58.776 slat (usec): min=9, max=2116, avg=33.77, stdev=98.76 00:11:58.776 clat (usec): min=3241, max=76725, avg=44524.78, stdev=10839.68 00:11:58.776 lat (usec): min=3411, max=76741, avg=44558.55, stdev=10835.34 00:11:58.776 clat percentiles (usec): 00:11:58.776 | 1.00th=[11338], 5.00th=[28967], 10.00th=[33162], 20.00th=[38011], 00:11:58.776 | 30.00th=[40109], 40.00th=[42206], 50.00th=[43779], 60.00th=[45351], 00:11:58.776 | 70.00th=[47449], 
80.00th=[50594], 90.00th=[58459], 95.00th=[66847], 00:11:58.776 | 99.00th=[71828], 99.50th=[72877], 99.90th=[77071], 99.95th=[77071], 00:11:58.776 | 99.99th=[77071] 00:11:58.776 bw ( KiB/s): min=71424, max=81960, per=6.12%, avg=76692.00, stdev=7450.08, samples=2 00:11:58.776 iops : min= 558, max= 640, avg=599.00, stdev=57.98, samples=2 00:11:58.776 lat (msec) : 4=1.54%, 10=40.34%, 20=6.41%, 50=40.91%, 100=10.80% 00:11:58.776 cpu : usr=0.77%, sys=2.22%, ctx=945, majf=0, minf=1 00:11:58.776 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0% 00:11:58.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.776 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.776 issued rwts: total=600,632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.776 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.776 job15: (groupid=0, jobs=1): err= 0: pid=68233: Thu Jul 25 17:02:51 2024 00:11:58.776 read: IOPS=600, BW=75.1MiB/s (78.7MB/s)(77.5MiB/1032msec) 00:11:58.776 slat (usec): min=6, max=15903, avg=48.38, stdev=640.24 00:11:58.776 clat (usec): min=1679, max=40314, avg=7736.38, stdev=4191.33 00:11:58.776 lat (usec): min=1691, max=40333, avg=7784.76, stdev=4225.59 00:11:58.776 clat percentiles (usec): 00:11:58.776 | 1.00th=[ 2999], 5.00th=[ 5407], 10.00th=[ 5800], 20.00th=[ 6128], 00:11:58.776 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6718], 60.00th=[ 6915], 00:11:58.776 | 70.00th=[ 7242], 80.00th=[ 8094], 90.00th=[ 9372], 95.00th=[11207], 00:11:58.776 | 99.00th=[29230], 99.50th=[30540], 99.90th=[40109], 99.95th=[40109], 00:11:58.776 | 99.99th=[40109] 00:11:58.776 bw ( KiB/s): min=73472, max=84905, per=6.47%, avg=79188.50, stdev=8084.35, samples=2 00:11:58.776 iops : min= 574, max= 663, avg=618.50, stdev=62.93, samples=2 00:11:58.776 write: IOPS=628, BW=78.6MiB/s (82.4MB/s)(81.1MiB/1032msec); 0 zone resets 00:11:58.776 slat (usec): min=9, max=602, avg=28.48, stdev=41.74 00:11:58.776 clat (usec): min=638, max=77316, avg=43346.69, stdev=12290.66 00:11:58.776 lat (usec): min=770, max=77351, avg=43375.17, stdev=12292.96 00:11:58.776 clat percentiles (usec): 00:11:58.776 | 1.00th=[ 2671], 5.00th=[20579], 10.00th=[28443], 20.00th=[36963], 00:11:58.776 | 30.00th=[40109], 40.00th=[41681], 50.00th=[43779], 60.00th=[45351], 00:11:58.776 | 70.00th=[47449], 80.00th=[50594], 90.00th=[60556], 95.00th=[64750], 00:11:58.776 | 99.00th=[68682], 99.50th=[72877], 99.90th=[77071], 99.95th=[77071], 00:11:58.776 | 99.99th=[77071] 00:11:58.776 bw ( KiB/s): min=75776, max=82853, per=6.33%, avg=79314.50, stdev=5004.19, samples=2 00:11:58.776 iops : min= 592, max= 647, avg=619.50, stdev=38.89, samples=2 00:11:58.776 lat (usec) : 750=0.08% 00:11:58.776 lat (msec) : 2=0.47%, 4=0.95%, 10=44.21%, 20=3.55%, 50=40.11% 00:11:58.776 lat (msec) : 100=10.64% 00:11:58.776 cpu : usr=0.78%, sys=2.23%, ctx=1038, majf=0, minf=1 00:11:58.776 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.6%, >=64=0.0% 00:11:58.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.776 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:58.776 issued rwts: total=620,649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.776 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:58.776 00:11:58.776 Run status group 0 (all jobs): 00:11:58.776 READ: bw=1195MiB/s (1253MB/s), 66.2MiB/s-88.0MiB/s (69.4MB/s-92.3MB/s), io=1244MiB (1305MB), run=1021-1041msec 00:11:58.776 WRITE: bw=1223MiB/s (1282MB/s), 
72.8MiB/s-83.3MiB/s (76.3MB/s-87.3MB/s), io=1273MiB (1335MB), run=1021-1041msec 00:11:58.776 00:11:58.776 Disk stats (read/write): 00:11:58.776 sda: ios=643/617, merge=0/0, ticks=4616/24179, in_queue=28796, util=76.80% 00:11:58.776 sdb: ios=594/579, merge=0/0, ticks=4336/24075, in_queue=28412, util=77.36% 00:11:58.776 sdc: ios=665/592, merge=0/0, ticks=4927/23175, in_queue=28102, util=78.11% 00:11:58.776 sdd: ios=624/577, merge=0/0, ticks=4279/23952, in_queue=28232, util=77.46% 00:11:58.776 sde: ios=649/590, merge=0/0, ticks=4873/24152, in_queue=29025, util=79.01% 00:11:58.776 sdf: ios=644/570, merge=0/0, ticks=4709/23772, in_queue=28481, util=79.23% 00:11:58.776 sdg: ios=613/563, merge=0/0, ticks=4494/24078, in_queue=28573, util=81.90% 00:11:58.776 sdh: ios=513/552, merge=0/0, ticks=4090/24544, in_queue=28635, util=81.41% 00:11:58.776 sdi: ios=522/545, merge=0/0, ticks=3954/24551, in_queue=28506, util=81.35% 00:11:58.776 sdj: ios=657/604, merge=0/0, ticks=5117/22893, in_queue=28011, util=82.05% 00:11:58.776 sdk: ios=604/568, merge=0/0, ticks=4945/23335, in_queue=28281, util=83.30% 00:11:58.776 sdl: ios=575/540, merge=0/0, ticks=4284/24232, in_queue=28517, util=83.74% 00:11:58.776 sdm: ios=619/569, merge=0/0, ticks=4809/23423, in_queue=28233, util=84.32% 00:11:58.776 sdn: ios=561/533, merge=0/0, ticks=4495/24096, in_queue=28592, util=84.82% 00:11:58.776 sdo: ios=565/565, merge=0/0, ticks=4317/23861, in_queue=28179, util=85.90% 00:11:58.776 sdp: ios=581/583, merge=0/0, ticks=4348/24103, in_queue=28452, util=87.96% 00:11:58.776 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@82 -- # iscsicleanup 00:11:58.776 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:11:58.776 Cleaning up iSCSI connection 00:11:58.776 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:11:59.344 Logging out of session [sid: 14, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:11:59.344 Logging out of session [sid: 15, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:59.344 Logging out of session [sid: 16, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:11:59.344 Logging out of session [sid: 17, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:11:59.344 Logging out of session [sid: 18, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:11:59.344 Logging out of session [sid: 19, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:11:59.344 Logging out of session [sid: 20, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:11:59.344 Logging out of session [sid: 21, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:11:59.344 Logging out of session [sid: 22, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:11:59.344 Logging out of session [sid: 23, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:11:59.344 Logging out of session [sid: 24, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:11:59.344 Logging out of session [sid: 25, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:11:59.344 Logging out of session [sid: 26, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:11:59.344 Logging out of session [sid: 27, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:11:59.344 Logging out of session [sid: 28, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:11:59.344 Logging 
out of session [sid: 29, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:11:59.344 Logout of [sid: 14, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 15, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 16, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 17, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 18, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 19, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 20, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 21, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 22, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 23, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 24, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 25, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 26, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 27, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 28, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:11:59.344 Logout of [sid: 29, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:11:59.344 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@985 -- # rm -rf 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@84 -- # RPCS= 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # seq 0 15 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target0\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc0\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target1\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc1\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target2\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc2\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 
-- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target3\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc3\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target4\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc4\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target5\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc5\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target6\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc6\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target7\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc7\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target8\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc8\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target9\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc9\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target10\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc10\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node 
iqn.2016-06.io.spdk:Target11\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc11\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target12\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc12\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target13\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc13\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target14\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc14\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target15\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc15\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # echo -e iscsi_delete_target_node 'iqn.2016-06.io.spdk:Target0\nbdev_malloc_delete' 'Malloc0\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target1\nbdev_malloc_delete' 'Malloc1\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target2\nbdev_malloc_delete' 'Malloc2\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target3\nbdev_malloc_delete' 'Malloc3\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target4\nbdev_malloc_delete' 'Malloc4\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target5\nbdev_malloc_delete' 'Malloc5\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target6\nbdev_malloc_delete' 'Malloc6\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target7\nbdev_malloc_delete' 'Malloc7\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target8\nbdev_malloc_delete' 'Malloc8\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target9\nbdev_malloc_delete' 'Malloc9\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target10\nbdev_malloc_delete' 'Malloc10\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target11\nbdev_malloc_delete' 'Malloc11\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target12\nbdev_malloc_delete' 'Malloc12\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target13\nbdev_malloc_delete' 'Malloc13\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target14\nbdev_malloc_delete' 'Malloc14\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target15\nbdev_malloc_delete' 'Malloc15\n' 00:11:59.345 17:02:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:59.912 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@92 -- # trap 'delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:59.912 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@94 -- # killprocess 67709 00:11:59.912 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@950 -- # '[' -z 67709 ']' 00:11:59.912 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # kill -0 67709 00:11:59.912 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@955 -- # uname 00:11:59.912 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:00.170 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67709 00:12:00.170 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:00.170 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:00.170 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67709' 00:12:00.170 killing process with pid 67709 00:12:00.170 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@969 -- # kill 67709 00:12:00.170 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@974 -- # wait 67709 00:12:00.472 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@95 -- # killprocess 67744 00:12:00.472 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@950 -- # '[' -z 67744 ']' 00:12:00.472 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # kill -0 67744 00:12:00.472 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@955 -- # uname 00:12:00.472 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:00.472 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67744 00:12:00.472 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@956 -- # process_name=spdk_trace_reco 00:12:00.472 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@960 -- # '[' spdk_trace_reco = sudo ']' 00:12:00.472 killing process with pid 67744 00:12:00.472 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67744' 00:12:00.472 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@969 -- # kill 67744 00:12:00.472 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@974 -- # wait 67744 00:12:00.472 17:02:52 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@96 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f ./tmp-trace/record.trace 00:12:12.675 17:03:04 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # grep 'trace entries for lcore' ./tmp-trace/record.notice 00:12:12.675 17:03:04 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # cut -d ' ' -f 2 00:12:12.675 17:03:04 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # record_num='169778 00:12:12.675 176582 00:12:12.675 176096 00:12:12.675 174007' 00:12:12.675 17:03:04 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # grep 'Trace Size of lcore' ./tmp-trace/trace.log 00:12:12.675 17:03:04 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # cut -d ' ' -f 
6 00:12:12.675 17:03:04 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # trace_tool_num='169778 00:12:12.675 176582 00:12:12.675 176096 00:12:12.675 174007' 00:12:12.675 17:03:04 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@105 -- # delete_tmp_files 00:12:12.675 17:03:04 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@19 -- # rm -rf ./tmp-trace 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@107 -- # echo 'entries numbers from trace record are:' 169778 176582 176096 174007 00:12:12.675 entries numbers from trace record are: 169778 176582 176096 174007 00:12:12.675 entries numbers from trace tool are: 169778 176582 176096 174007 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@108 -- # echo 'entries numbers from trace tool are:' 169778 176582 176096 174007 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@110 -- # arr_record_num=($record_num) 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@111 -- # arr_trace_tool_num=($trace_tool_num) 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@112 -- # len_arr_record_num=4 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@113 -- # len_arr_trace_tool_num=4 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@116 -- # '[' 4 -ne 4 ']' 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # seq 0 3 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 169778 -le 4096 ']' 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 169778 -ne 169778 ']' 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 176582 -le 4096 ']' 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 176582 -ne 176582 ']' 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 176096 -le 4096 ']' 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 176096 -ne 176096 ']' 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 174007 -le 4096 ']' 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 174007 -ne 174007 ']' 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@135 -- # trap - SIGINT SIGTERM EXIT 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@136 -- # iscsitestfini 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:12.675 00:12:12.675 real 0m19.400s 00:12:12.675 user 
0m41.846s 00:12:12.675 sys 0m3.872s 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.675 ************************************ 00:12:12.675 END TEST iscsi_tgt_trace_record 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:12:12.675 ************************************ 00:12:12.675 17:03:05 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@41 -- # run_test iscsi_tgt_login_redirection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:12:12.675 17:03:05 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:12.675 17:03:05 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.675 17:03:05 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:12.675 ************************************ 00:12:12.675 START TEST iscsi_tgt_login_redirection 00:12:12.675 ************************************ 00:12:12.675 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:12:12.935 * Looking for test storage... 00:12:12.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" 
"${ISCSI_APP[@]}") 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@12 -- # iscsitestinit 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@14 -- # NULL_BDEV_SIZE=64 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@15 -- # NULL_BLOCK_SIZE=512 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@17 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:12:12.935 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@20 -- # rpc_addr1=/var/tmp/spdk0.sock 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@21 -- # rpc_addr2=/var/tmp/spdk1.sock 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@25 -- # timing_enter start_iscsi_tgts 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@28 -- # pid1=68585 00:12:12.936 Process pid: 68585 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@29 -- # echo 'Process pid: 68585' 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@27 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -i 0 -m 0x1 --wait-for-rpc 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@32 -- # pid2=68586 00:12:12.936 Process pid: 68586 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@33 -- # echo 'Process pid: 68586' 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@35 -- # trap 'killprocess $pid1; killprocess $pid2; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@37 -- # waitforlisten 68585 /var/tmp/spdk0.sock 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@831 -- # '[' -z 68585 ']' 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk0.sock 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:12.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 
00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:12:12.936 17:03:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -i 1 -m 0x2 --wait-for-rpc 00:12:12.936 [2024-07-25 17:03:05.295658] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:12.936 [2024-07-25 17:03:05.295730] [ DPDK EAL parameters: iscsi -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.936 [2024-07-25 17:03:05.303903] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:12.936 [2024-07-25 17:03:05.303965] [ DPDK EAL parameters: iscsi -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:13.195 [2024-07-25 17:03:05.438499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.195 [2024-07-25 17:03:05.440622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.195 [2024-07-25 17:03:05.535269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.195 [2024-07-25 17:03:05.550264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.843 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:13.843 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@864 -- # return 0 00:12:13.843 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_set_options -w 0 -o 30 -a 16 00:12:13.843 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock framework_start_init 00:12:14.412 iscsi_tgt_1 is listening. 00:12:14.413 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@40 -- # echo 'iscsi_tgt_1 is listening.' 00:12:14.413 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@42 -- # waitforlisten 68586 /var/tmp/spdk1.sock 00:12:14.413 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@831 -- # '[' -z 68586 ']' 00:12:14.413 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk1.sock 00:12:14.413 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:14.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 00:12:14.413 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 
00:12:14.413 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:14.413 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:12:14.413 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:14.413 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@864 -- # return 0 00:12:14.413 17:03:06 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_set_options -w 0 -o 30 -a 16 00:12:14.671 17:03:07 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock framework_start_init 00:12:15.239 iscsi_tgt_2 is listening. 00:12:15.239 17:03:07 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@45 -- # echo 'iscsi_tgt_2 is listening.' 00:12:15.239 17:03:07 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@47 -- # timing_exit start_iscsi_tgts 00:12:15.239 17:03:07 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:15.239 17:03:07 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:12:15.239 17:03:07 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:12:15.239 17:03:07 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 10.0.0.1:3260 00:12:15.498 17:03:07 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock bdev_null_create Null0 64 512 00:12:15.758 Null0 00:12:15.758 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:12:15.758 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:12:16.017 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 10.0.0.3:3260 -p 00:12:16.276 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock bdev_null_create Null0 64 512 00:12:16.276 Null0 00:12:16.276 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:12:16.535 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@67 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:16.535 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:12:16.535 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@68 -- # iscsiadm -m 
node --login -p 10.0.0.1:3260 00:12:16.535 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:12:16.535 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:12:16.535 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@69 -- # waitforiscsidevices 1 00:12:16.535 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@116 -- # local num=1 00:12:16.535 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:12:16.535 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:16.535 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:16.535 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:16.535 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # true 00:12:16.535 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # n=0 00:12:16.535 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 1 ']' 00:12:16.535 17:03:08 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@121 -- # sleep 0.1 00:12:16.535 [2024-07-25 17:03:08.955282] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.794 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i++ )) 00:12:16.794 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:16.794 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:16.794 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:16.794 FIO pid: 68689 00:12:16.794 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # n=1 00:12:16.794 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:12:16.794 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@123 -- # return 0 00:12:16.794 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@72 -- # fiopid=68689 00:12:16.794 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t randrw -r 15 00:12:16.794 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@73 -- # echo 'FIO pid: 68689' 00:12:16.794 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@75 -- # trap 'iscsicleanup; killprocess $pid1; killprocess $pid2; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:12:16.795 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:12:16.795 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # jq length 00:12:16.795 [global] 00:12:16.795 thread=1 00:12:16.795 invalidate=1 00:12:16.795 rw=randrw 00:12:16.795 time_based=1 00:12:16.795 runtime=15 00:12:16.795 ioengine=libaio 00:12:16.795 direct=1 00:12:16.795 bs=512 00:12:16.795 iodepth=1 00:12:16.795 norandommap=1 00:12:16.795 numjobs=1 00:12:16.795 
00:12:16.795 [job0] 00:12:16.795 filename=/dev/sda 00:12:16.795 queue_depth set to 113 (sda) 00:12:17.053 job0: (g=0): rw=randrw, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:12:17.053 fio-3.35 00:12:17.053 Starting 1 thread 00:12:17.053 [2024-07-25 17:03:09.287158] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.054 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # '[' 1 = 1 ']' 00:12:17.054 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:12:17.054 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # jq length 00:12:17.054 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # '[' 0 = 0 ']' 00:12:17.054 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 -a 10.0.0.3 -p 3260 00:12:17.313 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:12:17.572 17:03:09 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@85 -- # sleep 5 00:12:22.842 17:03:14 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # jq length 00:12:22.842 17:03:14 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:12:22.842 17:03:15 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # '[' 0 = 0 ']' 00:12:22.842 17:03:15 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:12:22.842 17:03:15 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # jq length 00:12:23.101 17:03:15 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # '[' 1 = 1 ']' 00:12:23.101 17:03:15 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 00:12:23.101 17:03:15 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:12:23.360 17:03:15 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@93 -- # sleep 5 00:12:28.633 17:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:12:28.633 17:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # jq length 00:12:28.633 17:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # '[' 1 = 1 ']' 00:12:28.633 17:03:20 iscsi_tgt.iscsi_tgt_login_redirection 
-- login_redirection/login_redirection.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:12:28.633 17:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # jq length 00:12:28.633 17:03:21 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # '[' 0 = 0 ']' 00:12:28.633 17:03:21 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@98 -- # wait 68689 00:12:32.824 [2024-07-25 17:03:24.395486] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:32.824 00:12:32.824 job0: (groupid=0, jobs=1): err= 0: pid=68711: Thu Jul 25 17:03:24 2024 00:12:32.824 read: IOPS=6668, BW=3334KiB/s (3414kB/s)(48.8MiB/15001msec) 00:12:32.824 slat (nsec): min=3379, max=71369, avg=5184.38, stdev=1147.27 00:12:32.824 clat (usec): min=36, max=2006.1k, avg=68.99, stdev=6342.92 00:12:32.824 lat (usec): min=46, max=2006.1k, avg=74.18, stdev=6342.96 00:12:32.824 clat percentiles (usec): 00:12:32.824 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 48], 00:12:32.824 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 48], 60.00th=[ 48], 00:12:32.824 | 70.00th=[ 49], 80.00th=[ 50], 90.00th=[ 54], 95.00th=[ 57], 00:12:32.824 | 99.00th=[ 65], 99.50th=[ 70], 99.90th=[ 97], 99.95th=[ 139], 00:12:32.824 | 99.99th=[ 457] 00:12:32.824 bw ( KiB/s): min= 320, max= 4746, per=100.00%, avg=4159.30, stdev=1071.84, samples=23 00:12:32.824 iops : min= 640, max= 9492, avg=8318.61, stdev=2143.68, samples=23 00:12:32.824 write: IOPS=6660, BW=3330KiB/s (3410kB/s)(48.8MiB/15001msec); 0 zone resets 00:12:32.824 slat (nsec): min=3234, max=58424, avg=5071.68, stdev=1109.13 00:12:32.824 clat (usec): min=38, max=2006.6k, avg=69.86, stdev=6348.34 00:12:32.824 lat (usec): min=47, max=2006.7k, avg=74.93, stdev=6348.40 00:12:32.824 clat percentiles (usec): 00:12:32.824 | 1.00th=[ 46], 5.00th=[ 46], 10.00th=[ 46], 20.00th=[ 49], 00:12:32.824 | 30.00th=[ 49], 40.00th=[ 49], 50.00th=[ 49], 60.00th=[ 49], 00:12:32.824 | 70.00th=[ 49], 80.00th=[ 51], 90.00th=[ 55], 95.00th=[ 58], 00:12:32.824 | 99.00th=[ 66], 99.50th=[ 76], 99.90th=[ 109], 99.95th=[ 172], 00:12:32.824 | 99.99th=[ 529] 00:12:32.824 bw ( KiB/s): min= 322, max= 4701, per=100.00%, avg=4150.65, stdev=1077.74, samples=23 00:12:32.824 iops : min= 644, max= 9402, avg=8301.30, stdev=2155.49, samples=23 00:12:32.824 lat (usec) : 50=80.23%, 100=19.66%, 250=0.08%, 500=0.02%, 750=0.01% 00:12:32.824 lat (usec) : 1000=0.01% 00:12:32.824 lat (msec) : 2=0.01%, >=2000=0.01% 00:12:32.824 cpu : usr=3.11%, sys=9.92%, ctx=199950, majf=0, minf=1 00:12:32.824 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.824 issued rwts: total=100027,99907,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.824 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.824 00:12:32.824 Run status group 0 (all jobs): 00:12:32.824 READ: bw=3334KiB/s (3414kB/s), 3334KiB/s-3334KiB/s (3414kB/s-3414kB/s), io=48.8MiB (51.2MB), run=15001-15001msec 00:12:32.824 WRITE: bw=3330KiB/s (3410kB/s), 3330KiB/s-3330KiB/s (3410kB/s-3410kB/s), io=48.8MiB (51.2MB), run=15001-15001msec 00:12:32.824 00:12:32.824 Disk stats (read/write): 00:12:32.824 sda: ios=99093/98904, merge=0/0, ticks=6766/6828, in_queue=13594, util=99.45% 00:12:32.824 Cleaning up iSCSI connection 
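For reference, the login-redirection round trip traced above reduces to the following minimal sketch (socket paths, IQN, and portal addresses are taken verbatim from the trace; error handling and the concurrent fio job are omitted):

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

conn_count() {   # number of active iSCSI connections on one target instance
    "$rpc" -s "$1" iscsi_get_connections | jq length
}

# Redirect portal group 1 of Target1 to the second portal (10.0.0.3:3260),
# then ask the target to log the initiator out so it reconnects there.
"$rpc" -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect \
    iqn.2016-06.io.spdk:Target1 1 -a 10.0.0.3 -p 3260
"$rpc" -s /var/tmp/spdk0.sock iscsi_target_node_request_logout \
    iqn.2016-06.io.spdk:Target1 -t 1
sleep 5    # give the initiator time to log back in at the new portal

# The session must have moved: zero connections on spdk0, one on spdk1.
[ "$(conn_count /var/tmp/spdk0.sock)" -eq 0 ]
[ "$(conn_count /var/tmp/spdk1.sock)" -eq 1 ]

# Calling set_redirect without -a/-p (as at login_redirection.sh@90 above)
# clears the temporary redirect; the same logout/relogin cycle then brings
# the session back to spdk0 while fio I/O continues uninterrupted.
"$rpc" -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1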
00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@100 -- # trap - SIGINT SIGTERM EXIT 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@102 -- # iscsicleanup 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:12:32.824 Logging out of session [sid: 30, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:12:32.824 Logout of [sid: 30, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@985 -- # rm -rf 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@103 -- # killprocess 68585 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@950 -- # '[' -z 68585 ']' 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # kill -0 68585 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@955 -- # uname 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68585 00:12:32.824 killing process with pid 68585 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68585' 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@969 -- # kill 68585 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@974 -- # wait 68585 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@104 -- # killprocess 68586 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@950 -- # '[' -z 68586 ']' 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # kill -0 68586 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@955 -- # uname 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68586 00:12:32.824 killing process with pid 68586 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68586' 00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@969 -- # kill 68586 
00:12:32.824 17:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@974 -- # wait 68586 00:12:32.824 17:03:25 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@105 -- # iscsitestfini 00:12:32.825 17:03:25 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:32.825 ************************************ 00:12:32.825 END TEST iscsi_tgt_login_redirection 00:12:32.825 ************************************ 00:12:32.825 00:12:32.825 real 0m20.136s 00:12:32.825 user 0m39.075s 00:12:32.825 sys 0m6.091s 00:12:32.825 17:03:25 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.825 17:03:25 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:12:32.825 17:03:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@42 -- # run_test iscsi_tgt_digests /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:12:32.825 17:03:25 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:32.825 17:03:25 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.825 17:03:25 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:33.084 ************************************ 00:12:33.084 START TEST iscsi_tgt_digests 00:12:33.084 ************************************ 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:12:33.084 * Looking for test storage... 00:12:33.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@27 -- # 
PORTAL_TAG=1 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@11 -- # iscsitestinit 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@49 -- # MALLOC_BDEV_SIZE=64 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@52 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@54 -- # timing_enter start_iscsi_tgt 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@57 -- # pid=68975 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@56 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:12:33.084 Process pid: 68975 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@58 -- # echo 'Process pid: 68975' 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@60 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@62 -- # waitforlisten 68975 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@831 -- # '[' -z 68975 ']' 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.084 17:03:25 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:12:33.084 [2024-07-25 17:03:25.494668] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:33.084 [2024-07-25 17:03:25.494738] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68975 ] 00:12:33.342 [2024-07-25 17:03:25.635641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.342 [2024-07-25 17:03:25.729431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.342 [2024-07-25 17:03:25.729783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.342 [2024-07-25 17:03:25.729826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.342 [2024-07-25 17:03:25.729970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.910 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.910 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@864 -- # return 0 00:12:33.910 17:03:26 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@63 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:12:33.910 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.910 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:12:33.910 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.911 17:03:26 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@64 -- # rpc_cmd framework_start_init 00:12:33.911 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.911 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:12:34.170 iscsi_tgt is listening. Running tests... 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@65 -- # echo 'iscsi_tgt is listening. Running tests...' 
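The target bring-up traced around this point boils down to a handful of RPCs; a condensed sketch using the same arguments that appear in the trace (rpc_cmd in the log is the test framework's wrapper around scripts/rpc.py):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" iscsi_set_options -o 30 -a 16              # same flags as digests.sh@63 above
"$rpc" framework_start_init                       # finish init (target ran with --wait-for-rpc)
"$rpc" iscsi_create_portal_group 1 10.0.0.1:3260
"$rpc" iscsi_create_initiator_group 2 ANY 10.0.0.2/32
"$rpc" bdev_malloc_create 64 512                  # 64 MiB RAM-backed bdev -> "Malloc0"
"$rpc" iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d   # LUN0=Malloc0, PG1:IG2

# The initiator can then discover the node, as the trace shows next:
iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260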
00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@67 -- # timing_exit start_iscsi_tgt 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@69 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@70 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@71 -- # rpc_cmd bdev_malloc_create 64 512 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:12:34.170 Malloc0 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@76 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.170 17:03:26 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@77 -- # sleep 1 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@79 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:35.546 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.DataDigest' -v None 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # true 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # DataDigestAbility='iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 00:12:35.546 iscsiadm: Could not execute operation on all records: invalid parameter' 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@84 -- # '[' 'iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 
00:12:35.546 iscsiadm: Could not execute operation on all records: invalid parameterx' '!=' x ']' 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@85 -- # run_test iscsi_tgt_digest iscsi_header_digest_test 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:12:35.546 ************************************ 00:12:35.546 START TEST iscsi_tgt_digest 00:12:35.546 ************************************ 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1125 -- # iscsi_header_digest_test 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@27 -- # node_login_fio_logout 'HeaderDigest -v CRC32C' 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:12:35.546 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:35.546 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:35.546 [2024-07-25 17:03:27.730980] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:12:35.546 17:03:27 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:12:35.546 [global] 00:12:35.546 thread=1 00:12:35.546 invalidate=1 00:12:35.546 rw=write 00:12:35.546 time_based=1 00:12:35.546 runtime=2 00:12:35.546 ioengine=libaio 00:12:35.546 direct=1 00:12:35.546 bs=512 00:12:35.546 iodepth=1 00:12:35.546 norandommap=1 00:12:35.546 numjobs=1 00:12:35.546 00:12:35.546 [job0] 00:12:35.546 filename=/dev/sda 00:12:35.546 queue_depth set to 113 (sda) 00:12:35.546 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:12:35.546 fio-3.35 00:12:35.546 Starting 1 thread 00:12:35.546 [2024-07-25 17:03:27.938023] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:38.132 [2024-07-25 17:03:30.044914] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:38.132 00:12:38.132 job0: (groupid=0, jobs=1): err= 0: pid=69067: Thu Jul 25 17:03:30 2024 00:12:38.132 write: IOPS=14.1k, BW=7051KiB/s (7220kB/s)(13.8MiB/2001msec); 0 zone resets 00:12:38.132 slat (nsec): min=3354, max=63660, avg=5102.54, stdev=1171.28 00:12:38.132 clat (usec): min=52, max=2141, avg=65.35, stdev=21.80 00:12:38.132 lat (usec): min=57, max=2147, avg=70.45, stdev=21.95 00:12:38.132 clat percentiles (usec): 00:12:38.132 | 1.00th=[ 57], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 62], 00:12:38.132 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 66], 00:12:38.132 | 70.00th=[ 67], 80.00th=[ 68], 90.00th=[ 71], 95.00th=[ 74], 00:12:38.132 | 99.00th=[ 86], 99.50th=[ 94], 99.90th=[ 126], 99.95th=[ 235], 00:12:38.132 | 99.99th=[ 1860] 00:12:38.132 bw ( KiB/s): min= 6906, max= 7152, per=99.68%, avg=7028.33, stdev=123.01, samples=3 00:12:38.132 iops : min=13812, max=14304, avg=14056.67, stdev=246.01, samples=3 00:12:38.132 lat (usec) : 100=99.70%, 250=0.26%, 500=0.03%, 750=0.01% 00:12:38.132 lat (msec) : 2=0.01%, 4=0.01% 00:12:38.132 cpu : usr=2.95%, sys=11.05%, ctx=28220, majf=0, minf=1 00:12:38.132 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.132 issued rwts: total=0,28217,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.132 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.132 00:12:38.132 Run status group 0 (all jobs): 00:12:38.132 WRITE: bw=7051KiB/s (7220kB/s), 7051KiB/s-7051KiB/s (7220kB/s-7220kB/s), io=13.8MiB (14.4MB), run=2001-2001msec 00:12:38.132 00:12:38.132 Disk stats (read/write): 00:12:38.132 sda: ios=48/26673, merge=0/0, ticks=7/1708, in_queue=1715, util=95.33% 00:12:38.132 17:03:30 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:12:38.132 [global] 00:12:38.132 thread=1 00:12:38.132 invalidate=1 00:12:38.132 rw=read 00:12:38.132 time_based=1 00:12:38.132 runtime=2 00:12:38.132 ioengine=libaio 00:12:38.132 direct=1 00:12:38.132 bs=512 00:12:38.132 iodepth=1 00:12:38.132 norandommap=1 00:12:38.132 numjobs=1 00:12:38.132 00:12:38.132 [job0] 00:12:38.132 filename=/dev/sda 00:12:38.132 queue_depth set to 113 (sda) 00:12:38.132 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:12:38.132 fio-3.35 00:12:38.132 Starting 1 thread 00:12:40.038 00:12:40.038 job0: (groupid=0, jobs=1): err= 0: pid=69127: Thu Jul 25 17:03:32 2024 00:12:40.038 read: IOPS=15.8k, BW=7906KiB/s (8095kB/s)(15.4MiB/2000msec) 00:12:40.038 slat (usec): min=3, max=131, avg= 4.06, stdev= 1.30 00:12:40.038 clat (usec): min=41, max=514, avg=58.61, stdev= 6.69 00:12:40.038 lat (usec): min=53, max=535, avg=62.66, stdev= 7.19 00:12:40.038 clat percentiles (usec): 00:12:40.038 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56], 00:12:40.038 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:12:40.038 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 63], 95.00th=[ 65], 00:12:40.038 | 99.00th=[ 72], 99.50th=[ 80], 99.90th=[ 97], 99.95th=[ 110], 00:12:40.038 | 99.99th=[ 404] 00:12:40.038 bw ( KiB/s): min= 7819, max= 
7975, per=100.00%, avg=7922.00, stdev=89.21, samples=3 00:12:40.038 iops : min=15638, max=15950, avg=15844.00, stdev=178.43, samples=3 00:12:40.038 lat (usec) : 50=0.09%, 100=99.83%, 250=0.06%, 500=0.01%, 750=0.01% 00:12:40.038 cpu : usr=4.45%, sys=10.30%, ctx=31656, majf=0, minf=1 00:12:40.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.038 issued rwts: total=31623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.038 00:12:40.038 Run status group 0 (all jobs): 00:12:40.038 READ: bw=7906KiB/s (8095kB/s), 7906KiB/s-7906KiB/s (8095kB/s-8095kB/s), io=15.4MiB (16.2MB), run=2000-2000msec 00:12:40.038 00:12:40.038 Disk stats (read/write): 00:12:40.038 sda: ios=29991/0, merge=0/0, ticks=1666/0, in_queue=1666, util=95.13% 00:12:40.038 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:12:40.038 Logging out of session [sid: 31, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:40.038 Logout of [sid: 31, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:12:40.038 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:12:40.038 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:12:40.038 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:12:40.038 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:40.038 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:40.038 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:40.038 iscsiadm: No active sessions. 00:12:40.038 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:12:40.038 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:12:40.038 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:12:40.038 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:12:40.038 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@31 -- # node_login_fio_logout 'HeaderDigest -v CRC32C,None' 00:12:40.038 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:12:40.038 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C,None 00:12:40.297 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:12:40.297 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:40.297 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
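The login just traced uses the same per-digest helper as the first pass; a minimal sketch of that cycle (the 0.1 s poll interval is an assumption — the trace only shows the 20-iteration retry loop in waitforiscsidevices):

set_header_digest_and_login() {
    local value=$1    # e.g. CRC32C, or the preference list CRC32C,None
    iscsiadm -m node -p 10.0.0.1:3260 -o update \
        -n 'node.conn[0].iscsi.HeaderDigest' -v "$value"
    iscsiadm -m node --login -p 10.0.0.1:3260
    # Wait until exactly one attached SCSI disk shows up in the session listing.
    for _ in $(seq 1 20); do
        n=$(iscsiadm -m session -P 3 | grep -c 'Attached scsi disk sd[a-z]*')
        [ "$n" -eq 1 ] && return 0
        sleep 0.1    # assumed interval; not visible in the trace
    done
    return 1
}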
00:12:40.297 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:12:40.297 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:12:40.297 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:12:40.297 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:40.297 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:40.297 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:40.297 [2024-07-25 17:03:32.561771] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:40.297 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:12:40.297 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:12:40.297 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:12:40.297 17:03:32 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:12:40.297 [global] 00:12:40.297 thread=1 00:12:40.297 invalidate=1 00:12:40.297 rw=write 00:12:40.297 time_based=1 00:12:40.297 runtime=2 00:12:40.297 ioengine=libaio 00:12:40.297 direct=1 00:12:40.297 bs=512 00:12:40.297 iodepth=1 00:12:40.297 norandommap=1 00:12:40.297 numjobs=1 00:12:40.297 00:12:40.297 [job0] 00:12:40.297 filename=/dev/sda 00:12:40.297 queue_depth set to 113 (sda) 00:12:40.556 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:12:40.556 fio-3.35 00:12:40.556 Starting 1 thread 00:12:40.556 [2024-07-25 17:03:32.769900] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:42.456 [2024-07-25 17:03:34.891267] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:42.714 00:12:42.714 job0: (groupid=0, jobs=1): err= 0: pid=69197: Thu Jul 25 17:03:34 2024 00:12:42.714 write: IOPS=13.9k, BW=6971KiB/s (7138kB/s)(13.6MiB/2001msec); 0 zone resets 00:12:42.714 slat (nsec): min=3403, max=69185, avg=5462.39, stdev=1955.64 00:12:42.714 clat (usec): min=42, max=3469, avg=65.70, stdev=27.45 00:12:42.714 lat (usec): min=58, max=3478, avg=71.16, stdev=27.74 00:12:42.714 clat percentiles (usec): 00:12:42.714 | 1.00th=[ 58], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 62], 00:12:42.714 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 66], 00:12:42.714 | 70.00th=[ 67], 80.00th=[ 69], 90.00th=[ 72], 95.00th=[ 75], 00:12:42.714 | 99.00th=[ 86], 99.50th=[ 98], 99.90th=[ 118], 99.95th=[ 127], 00:12:42.714 | 99.99th=[ 1958] 00:12:42.714 bw ( KiB/s): min= 6820, max= 7125, per=99.40%, avg=6929.00, stdev=170.10, samples=3 00:12:42.714 iops : min=13640, max=14250, avg=13858.00, stdev=340.19, samples=3 00:12:42.714 lat (usec) : 50=0.01%, 100=99.57%, 250=0.39%, 500=0.01%, 750=0.01% 00:12:42.714 lat (usec) : 1000=0.01% 00:12:42.714 lat (msec) : 2=0.01%, 4=0.01% 00:12:42.714 cpu : usr=3.40%, sys=11.80%, ctx=27906, majf=0, minf=1 00:12:42.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:42.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:12:42.714 issued rwts: total=0,27896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.715 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:42.715 00:12:42.715 Run status group 0 (all jobs): 00:12:42.715 WRITE: bw=6971KiB/s (7138kB/s), 6971KiB/s-6971KiB/s (7138kB/s-7138kB/s), io=13.6MiB (14.3MB), run=2001-2001msec 00:12:42.715 00:12:42.715 Disk stats (read/write): 00:12:42.715 sda: ios=48/26152, merge=0/0, ticks=6/1672, in_queue=1678, util=95.39% 00:12:42.715 17:03:34 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:12:42.715 [global] 00:12:42.715 thread=1 00:12:42.715 invalidate=1 00:12:42.715 rw=read 00:12:42.715 time_based=1 00:12:42.715 runtime=2 00:12:42.715 ioengine=libaio 00:12:42.715 direct=1 00:12:42.715 bs=512 00:12:42.715 iodepth=1 00:12:42.715 norandommap=1 00:12:42.715 numjobs=1 00:12:42.715 00:12:42.715 [job0] 00:12:42.715 filename=/dev/sda 00:12:42.715 queue_depth set to 113 (sda) 00:12:42.715 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:12:42.715 fio-3.35 00:12:42.715 Starting 1 thread 00:12:45.281 00:12:45.281 job0: (groupid=0, jobs=1): err= 0: pid=69250: Thu Jul 25 17:03:37 2024 00:12:45.281 read: IOPS=15.6k, BW=7780KiB/s (7967kB/s)(15.2MiB/2001msec) 00:12:45.281 slat (nsec): min=3393, max=91304, avg=4806.03, stdev=1652.41 00:12:45.281 clat (usec): min=2, max=571, avg=58.91, stdev= 6.41 00:12:45.281 lat (usec): min=52, max=576, avg=63.71, stdev= 6.67 00:12:45.281 clat percentiles (usec): 00:12:45.281 | 1.00th=[ 51], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 55], 00:12:45.281 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:12:45.281 | 70.00th=[ 61], 80.00th=[ 63], 90.00th=[ 65], 95.00th=[ 68], 00:12:45.281 | 99.00th=[ 79], 99.50th=[ 83], 99.90th=[ 99], 99.95th=[ 124], 00:12:45.281 | 99.99th=[ 165] 00:12:45.281 bw ( KiB/s): min= 7633, max= 8042, per=100.00%, avg=7885.00, stdev=220.43, samples=3 00:12:45.281 iops : min=15266, max=16084, avg=15770.00, stdev=440.86, samples=3 00:12:45.281 lat (usec) : 4=0.01%, 50=0.37%, 100=99.53%, 250=0.10%, 750=0.01% 00:12:45.281 cpu : usr=3.85%, sys=11.75%, ctx=31170, majf=0, minf=1 00:12:45.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:45.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.281 issued rwts: total=31135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:45.281 00:12:45.281 Run status group 0 (all jobs): 00:12:45.281 READ: bw=7780KiB/s (7967kB/s), 7780KiB/s-7780KiB/s (7967kB/s-7967kB/s), io=15.2MiB (15.9MB), run=2001-2001msec 00:12:45.281 00:12:45.281 Disk stats (read/write): 00:12:45.281 sda: ios=29533/0, merge=0/0, ticks=1677/0, in_queue=1676, util=95.13% 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:12:45.281 Logging out of session [sid: 32, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:45.281 Logout of [sid: 32, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
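The fio-wrapper calls above (-p iscsi -i 512 -d 1 -t read/write -r 2) generate exactly the [global]/[job0] job files echoed in the trace. An equivalent direct fio invocation, assuming the iSCSI disk is /dev/sda as in the session listing, would be roughly:

fio --name=job0 --filename=/dev/sda --rw=read --bs=512 --iodepth=1 \
    --ioengine=libaio --direct=1 --time_based --runtime=2 \
    --invalidate=1 --norandommap --numjobs=1
# Sketch only: the wrapper also resolves the iSCSI device path itself
# and sets thread=1, which is omitted here.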
00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:45.281 iscsiadm: No active sessions. 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:12:45.281 00:12:45.281 real 0m9.679s 00:12:45.281 user 0m0.898s 00:12:45.281 sys 0m1.317s 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@10 -- # set +x 00:12:45.281 ************************************ 00:12:45.281 END TEST iscsi_tgt_digest 00:12:45.281 ************************************ 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@92 -- # iscsicleanup 00:12:45.281 Cleaning up iSCSI connection 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:12:45.281 iscsiadm: No matching sessions found 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@983 -- # true 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@985 -- # rm -rf 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@93 -- # killprocess 68975 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@950 -- # '[' -z 68975 ']' 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # kill -0 68975 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@955 -- # uname 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68975 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68975' 00:12:45.281 killing process with pid 68975 00:12:45.281 17:03:37 
iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@969 -- # kill 68975 00:12:45.281 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@974 -- # wait 68975 00:12:45.540 17:03:37 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@94 -- # iscsitestfini 00:12:45.540 17:03:37 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:45.540 00:12:45.540 real 0m12.521s 00:12:45.540 user 0m46.149s 00:12:45.540 sys 0m3.789s 00:12:45.540 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:45.540 17:03:37 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:12:45.540 ************************************ 00:12:45.540 END TEST iscsi_tgt_digests 00:12:45.540 ************************************ 00:12:45.540 17:03:37 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@43 -- # run_test iscsi_tgt_fuzz /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:12:45.540 17:03:37 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:45.540 17:03:37 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.540 17:03:37 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:45.540 ************************************ 00:12:45.540 START TEST iscsi_tgt_fuzz 00:12:45.540 ************************************ 00:12:45.540 17:03:37 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:12:45.540 * Looking for test storage... 00:12:45.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/fuzz 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@11 -- # iscsitestinit 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@13 -- # '[' -z 10.0.0.1 ']' 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@18 -- # '[' -z 10.0.0.2 ']' 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@23 -- # timing_enter iscsi_fuzz_test 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@25 -- # MALLOC_BDEV_SIZE=64 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@26 -- # MALLOC_BLOCK_SIZE=4096 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@28 -- # TEST_TIMEOUT=1200 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@31 -- # for i in "$@" 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@32 -- # case "$i" in 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@34 -- # TEST_TIMEOUT=30 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@39 -- # timing_enter start_iscsi_tgt 00:12:45.798 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:45.799 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:45.799 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@42 -- # iscsipid=69351 00:12:45.799 Process iscsipid: 69351 00:12:45.799 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@43 -- # echo 'Process iscsipid: 69351' 00:12:45.799 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@45 -- # trap 'killprocess $iscsipid; exit 1' SIGINT SIGTERM EXIT 00:12:45.799 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@47 -- # waitforlisten 69351 00:12:45.799 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@831 -- # '[' -z 69351 ']' 00:12:45.799 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.799 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:45.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.799 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
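The fuzz stage that follows condenses to: bring the target up with one malloc-backed node, then aim the PDU fuzzer at it for 30 seconds. All arguments below are taken from the trace; 0xF0 is the fuzzer's core mask:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" iscsi_set_options -o 60 -a 16
"$rpc" framework_start_init
"$rpc" iscsi_create_portal_group 1 10.0.0.1:3260
"$rpc" iscsi_create_initiator_group 2 ANY 10.0.0.2/32
"$rpc" bdev_malloc_create 64 4096                  # 64 MiB bdev with 4 KiB blocks
"$rpc" iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d

# -T target IP, -t timeout in seconds; the run below reports
# 15063 valid and 139401 invalid opcode PDUs sent before shutdown.
/home/vagrant/spdk_repo/spdk/test/app/fuzz/iscsi_fuzz/iscsi_fuzz -m 0xF0 -T 10.0.0.1 -t 30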
00:12:45.799 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:45.799 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:45.799 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --disable-cpumask-locks --wait-for-rpc 00:12:46.733 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:46.733 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@864 -- # return 0 00:12:46.733 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@49 -- # rpc_cmd iscsi_set_options -o 60 -a 16 00:12:46.733 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.733 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:46.733 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.733 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@50 -- # rpc_cmd framework_start_init 00:12:46.733 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.733 17:03:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.733 iscsi_tgt is listening. Running tests... 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@51 -- # echo 'iscsi_tgt is listening. Running tests...' 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@52 -- # timing_exit start_iscsi_tgt 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@54 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@55 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@56 -- # rpc_cmd bdev_malloc_create 64 4096 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.733 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:46.991 Malloc0 00:12:46.991 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.991 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@57 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:12:46.991 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.991 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:46.991 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.991 17:03:39 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@58 -- # sleep 1 00:12:47.925 17:03:40 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@60 -- # trap 'killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:12:47.925 17:03:40 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/iscsi_fuzz/iscsi_fuzz -m 0xF0 -T 10.0.0.1 -t 30 00:13:19.999 pdu received after logout 00:13:19.999 Fuzzing completed. Shutting down the fuzz application. 00:13:19.999 00:13:19.999 device 0x1d71d40 stats: Sent 15063 valid opcode PDUs, 139401 invalid opcode PDUs. 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@64 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:disk1 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@67 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@71 -- # killprocess 69351 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@950 -- # '[' -z 69351 ']' 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # kill -0 69351 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@955 -- # uname 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69351 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69351' 00:13:19.999 killing process with pid 69351 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@969 -- # kill 69351 00:13:19.999 17:04:10 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@974 -- # wait 69351 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@73 -- # iscsitestfini 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@75 -- # timing_exit iscsi_fuzz_test 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:19.999 00:13:19.999 real 0m33.175s 00:13:19.999 user 3m8.015s 00:13:19.999 sys 0m17.202s 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:13:19.999 ************************************ 00:13:19.999 END TEST iscsi_tgt_fuzz 00:13:19.999 ************************************ 00:13:19.999 17:04:11 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@44 -- # run_test iscsi_tgt_multiconnection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:13:19.999 17:04:11 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:19.999 17:04:11 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.999 17:04:11 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:13:19.999 ************************************ 00:13:19.999 START TEST iscsi_tgt_multiconnection 00:13:19.999 ************************************ 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:13:19.999 * Looking for test storage... 00:13:19.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:13:19.999 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@11 -- # iscsitestinit 00:13:20.000 
17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@18 -- # CONNECTION_NUMBER=30 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@40 -- # timing_enter start_iscsi_tgt 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@42 -- # iscsipid=69794 00:13:20.000 iSCSI target launched. pid: 69794 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@43 -- # echo 'iSCSI target launched. pid: 69794' 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@44 -- # trap 'remove_backends; iscsicleanup; killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@46 -- # waitforlisten 69794 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 69794 ']' 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.000 17:04:11 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:13:20.000 [2024-07-25 17:04:11.332842] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:20.000 [2024-07-25 17:04:11.332917] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69794 ] 00:13:20.000 [2024-07-25 17:04:11.474184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.000 [2024-07-25 17:04:11.549299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.000 17:04:12 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.000 17:04:12 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:13:20.000 17:04:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 128 00:13:20.000 17:04:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:20.259 17:04:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:20.259 17:04:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:20.824 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@50 -- # timing_exit start_iscsi_tgt 00:13:20.824 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:20.824 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:13:20.824 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:13:20.824 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:13:21.081 Creating an iSCSI target node. 00:13:21.081 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@55 -- # echo 'Creating an iSCSI target node.' 
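For reference, the bring-up just traced condenses to the RPC sequence below (paths shortened into a variable; feeding gen_nvme.sh output into load_subsystem_config is inferred from the paired trace lines):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc iscsi_set_options -o 30 -a 128                 # option values as captured above
$rpc framework_start_init                           # release the --wait-for-rpc hold
$rpc load_subsystem_config < <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)  # attach Nvme0n1
$rpc iscsi_create_portal_group 1 10.0.0.1:3260      # PORTAL_TAG=1 on TARGET_IP
$rpc iscsi_create_initiator_group 2 ANY 10.0.0.2/32 # INITIATOR_TAG=2, any IQN from NETMASK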
00:13:21.081 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs0 -c 1048576 00:13:21.339 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # ls_guid=31c838c3-1e67-4e16-9645-b2a9a62e388d 00:13:21.339 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@59 -- # get_lvs_free_mb 31c838c3-1e67-4e16-9645-b2a9a62e388d 00:13:21.339 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1364 -- # local lvs_uuid=31c838c3-1e67-4e16-9645-b2a9a62e388d 00:13:21.339 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1365 -- # local lvs_info 00:13:21.339 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1366 -- # local fc 00:13:21.339 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1367 -- # local cs 00:13:21.339 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:21.598 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:13:21.598 { 00:13:21.598 "uuid": "31c838c3-1e67-4e16-9645-b2a9a62e388d", 00:13:21.598 "name": "lvs0", 00:13:21.598 "base_bdev": "Nvme0n1", 00:13:21.598 "total_data_clusters": 5099, 00:13:21.598 "free_clusters": 5099, 00:13:21.598 "block_size": 4096, 00:13:21.598 "cluster_size": 1048576 00:13:21.598 } 00:13:21.598 ]' 00:13:21.598 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="31c838c3-1e67-4e16-9645-b2a9a62e388d") .free_clusters' 00:13:21.598 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # fc=5099 00:13:21.598 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="31c838c3-1e67-4e16-9645-b2a9a62e388d") .cluster_size' 00:13:21.598 5099 00:13:21.598 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # cs=1048576 00:13:21.598 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1373 -- # free_mb=5099 00:13:21.598 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1374 -- # echo 5099 00:13:21.598 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@60 -- # lvol_bdev_size=169 00:13:21.598 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # seq 1 30 00:13:21.598 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:21.598 17:04:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_1 169 00:13:21.857 9d133700-e5ad-4657-b0f6-42a132ee2745 00:13:21.857 17:04:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:21.857 17:04:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_2 169 00:13:22.130 292ecd28-a242-4725-8295-63094981196f 00:13:22.130 17:04:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:13:22.130 17:04:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_3 169 00:13:22.130 da1c6a39-5162-47aa-bb01-05941f7443a2 00:13:22.130 17:04:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:22.130 17:04:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_4 169 00:13:22.389 ea2a2c5e-d7c4-439a-9e2e-270aca409d43 00:13:22.389 17:04:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:22.389 17:04:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_5 169 00:13:22.647 2e398561-4589-48bc-b3f3-3b4d5ec98421 00:13:22.647 17:04:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:22.647 17:04:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_6 169 00:13:22.647 94a830fc-d684-4dcb-a624-53532d86d75a 00:13:22.647 17:04:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:22.647 17:04:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_7 169 00:13:22.906 91f479bf-265a-46f9-82ec-91091a797ef3 00:13:22.906 17:04:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:22.906 17:04:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_8 169 00:13:23.164 3dedd105-e11b-4f93-8baf-583c87f514af 00:13:23.164 17:04:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:23.164 17:04:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_9 169 00:13:23.423 330ad194-25ec-44c4-95d0-90f1591f7d09 00:13:23.423 17:04:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:23.423 17:04:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_10 169 00:13:23.423 8215f815-1bd7-4ad2-8638-913cfe809b00 00:13:23.423 17:04:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:23.423 17:04:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_11 169 00:13:23.682 679b0cc0-0d8c-4ddf-b2c9-cdff92492b50 00:13:23.682 
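For reference, the get_lvs_free_mb/lvol sizing traced above reduces to cluster arithmetic on the bdev_lvol_get_lvstores output; a worked sketch with the captured values:

fc=5099 cs=1048576                    # free_clusters and cluster_size from the lvstore dump
free_mb=$((fc * cs / 1024 / 1024))    # 5099 clusters of 1 MiB each => 5099 MiB free
lvol_bdev_size=$((free_mb / 30))      # CONNECTION_NUMBER=30 => 169 MiB per lbd_N (integer division)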
17:04:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:23.682 17:04:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_12 169 00:13:23.941 5477e787-47dc-4ec9-a8e4-fa048d3c1cfa 00:13:23.941 17:04:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:23.941 17:04:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_13 169 00:13:23.941 33700d0b-3961-4760-b25e-c62b41888a92 00:13:23.941 17:04:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:23.941 17:04:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_14 169 00:13:24.199 c0703189-44a0-4626-8446-82079f97f99d 00:13:24.199 17:04:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:24.199 17:04:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_15 169 00:13:24.458 669c0af9-c253-4e02-ab1a-82a70c6ec228 00:13:24.458 17:04:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:24.458 17:04:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_16 169 00:13:24.717 8ffb0399-4f4a-49ab-8c75-e209ea00ed5b 00:13:24.717 17:04:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:24.717 17:04:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_17 169 00:13:24.717 3bc4fea5-84e2-468c-bb16-27c1971eda78 00:13:24.717 17:04:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:24.717 17:04:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_18 169 00:13:24.975 a66cd048-6ceb-4e67-b6b6-202ca8711682 00:13:24.975 17:04:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:24.975 17:04:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_19 169 00:13:25.236 9ffeec2f-a71f-4d57-85c6-9a7f43347dc4 00:13:25.236 17:04:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:25.236 17:04:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_20 169 00:13:25.495 cefeee94-1200-4056-a98a-564411c9656f 00:13:25.496 17:04:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:25.496 17:04:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_21 169 00:13:25.496 a0b19b4f-eb21-4b26-bdec-07ac611a070b 00:13:25.496 17:04:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:25.496 17:04:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_22 169 00:13:25.754 190a840e-aa4f-4d94-a493-29015293b8c4 00:13:25.754 17:04:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:25.754 17:04:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_23 169 00:13:26.013 586c2363-87da-4c84-beb1-396d8ee5e9cc 00:13:26.013 17:04:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:26.013 17:04:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_24 169 00:13:26.013 3807995b-9d7a-4739-bf90-8e6c2a0027ea 00:13:26.013 17:04:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:26.013 17:04:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_25 169 00:13:26.272 9c8016c6-7e96-4ea6-8182-ba155bf1e687 00:13:26.272 17:04:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:26.272 17:04:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_26 169 00:13:26.531 6c1a1bee-2133-48dd-a62b-806d4ed7af37 00:13:26.531 17:04:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:26.531 17:04:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_27 169 00:13:26.790 7943ced5-f85b-4394-88db-e9bfcb94c3ef 00:13:26.790 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:26.790 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_28 169 00:13:26.790 1cb584d9-b230-4abc-92bc-005766094dad 00:13:26.790 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:26.790 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection 
-- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_29 169 00:13:27.048 9adc308b-3d30-480e-9f07-22197a1913c5 00:13:27.048 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:27.048 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31c838c3-1e67-4e16-9645-b2a9a62e388d lbd_30 169 00:13:27.307 ce93fd0a-a0b6-4720-b48e-40ccaf6ec36b 00:13:27.307 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # seq 1 30 00:13:27.307 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:27.307 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_1:0 00:13:27.307 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias lvs0/lbd_1:0 1:2 256 -d 00:13:27.566 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:27.566 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_2:0 00:13:27.566 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias lvs0/lbd_2:0 1:2 256 -d 00:13:27.566 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:27.566 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_3:0 00:13:27.566 17:04:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias lvs0/lbd_3:0 1:2 256 -d 00:13:27.834 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:27.834 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_4:0 00:13:27.834 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias lvs0/lbd_4:0 1:2 256 -d 00:13:28.101 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:28.101 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_5:0 00:13:28.101 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias lvs0/lbd_5:0 1:2 256 -d 00:13:28.101 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:28.101 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_6:0 00:13:28.101 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
iscsi_create_target_node Target6 Target6_alias lvs0/lbd_6:0 1:2 256 -d 00:13:28.361 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:28.361 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_7:0 00:13:28.361 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias lvs0/lbd_7:0 1:2 256 -d 00:13:28.620 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:28.620 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_8:0 00:13:28.620 17:04:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias lvs0/lbd_8:0 1:2 256 -d 00:13:28.879 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:28.879 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_9:0 00:13:28.879 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias lvs0/lbd_9:0 1:2 256 -d 00:13:28.879 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:28.879 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_10:0 00:13:28.879 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias lvs0/lbd_10:0 1:2 256 -d 00:13:29.138 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:29.138 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_11:0 00:13:29.138 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target11 Target11_alias lvs0/lbd_11:0 1:2 256 -d 00:13:29.398 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:29.398 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_12:0 00:13:29.398 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target12 Target12_alias lvs0/lbd_12:0 1:2 256 -d 00:13:29.398 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:29.398 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_13:0 00:13:29.398 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target13 Target13_alias lvs0/lbd_13:0 1:2 256 -d 00:13:29.657 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 
-- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:29.657 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_14:0 00:13:29.657 17:04:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target14 Target14_alias lvs0/lbd_14:0 1:2 256 -d 00:13:29.916 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:29.916 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_15:0 00:13:29.916 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target15 Target15_alias lvs0/lbd_15:0 1:2 256 -d 00:13:29.916 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:29.916 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_16:0 00:13:29.916 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target16 Target16_alias lvs0/lbd_16:0 1:2 256 -d 00:13:30.175 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:30.175 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_17:0 00:13:30.175 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target17 Target17_alias lvs0/lbd_17:0 1:2 256 -d 00:13:30.434 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:30.434 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_18:0 00:13:30.434 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target18 Target18_alias lvs0/lbd_18:0 1:2 256 -d 00:13:30.434 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:30.434 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_19:0 00:13:30.434 17:04:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target19 Target19_alias lvs0/lbd_19:0 1:2 256 -d 00:13:30.693 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:30.693 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_20:0 00:13:30.693 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target20 Target20_alias lvs0/lbd_20:0 1:2 256 -d 00:13:30.952 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:30.952 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # 
lun=lvs0/lbd_21:0 00:13:30.952 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target21 Target21_alias lvs0/lbd_21:0 1:2 256 -d 00:13:30.952 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:30.952 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_22:0 00:13:30.952 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target22 Target22_alias lvs0/lbd_22:0 1:2 256 -d 00:13:31.210 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:31.210 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_23:0 00:13:31.210 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target23 Target23_alias lvs0/lbd_23:0 1:2 256 -d 00:13:31.467 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:31.467 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_24:0 00:13:31.467 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target24 Target24_alias lvs0/lbd_24:0 1:2 256 -d 00:13:31.726 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:31.726 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_25:0 00:13:31.726 17:04:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target25 Target25_alias lvs0/lbd_25:0 1:2 256 -d 00:13:31.726 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:31.726 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_26:0 00:13:31.726 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target26 Target26_alias lvs0/lbd_26:0 1:2 256 -d 00:13:31.987 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:31.987 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_27:0 00:13:31.987 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target27 Target27_alias lvs0/lbd_27:0 1:2 256 -d 00:13:32.246 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:32.246 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_28:0 00:13:32.246 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target28 Target28_alias lvs0/lbd_28:0 1:2 256 -d 00:13:32.506 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:32.506 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_29:0 00:13:32.506 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target29 Target29_alias lvs0/lbd_29:0 1:2 256 -d 00:13:32.506 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:32.506 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_30:0 00:13:32.506 17:04:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target30 Target30_alias lvs0/lbd_30:0 1:2 256 -d 00:13:32.765 17:04:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@69 -- # sleep 1 00:13:33.702 Logging into iSCSI target. 00:13:33.702 17:04:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@71 -- # echo 'Logging into iSCSI target.' 00:13:33.702 17:04:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@72 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target16 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target17 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target18 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target19 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target20 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target21 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target22 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target23 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target24 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target25 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target26 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target27 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target28 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target29 00:13:33.702 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target30 00:13:33.702 17:04:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@73 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:13:33.960 [2024-07-25 17:04:26.224620] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:33.960 
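For reference, the initiator side condenses to the commands below; the logout line is the assumed teardown counterpart performed later by iscsicleanup, not something traced at this point:

iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260              # lists the 30 Target IQNs above
iscsiadm -m node --login -p 10.0.0.1:3260                          # log in to every discovered node
iscsiadm -m session -P 3 | grep -c 'Attached scsi disk sd[a-z]*'   # expect 30; checked further below
iscsiadm -m node --logout -p 10.0.0.1:3260                         # assumed cleanup counterpart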
[2024-07-25 17:04:26.251317] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:33.960 [2024-07-25 17:04:26.266176] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:33.961 [2024-07-25 17:04:26.296615] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:33.961 [2024-07-25 17:04:26.307427] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:33.961 [2024-07-25 17:04:26.307549] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:33.961 [2024-07-25 17:04:26.320739] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:33.961 [2024-07-25 17:04:26.331632] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:33.961 [2024-07-25 17:04:26.373510] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:33.961 [2024-07-25 17:04:26.397732] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:33.961 [2024-07-25 17:04:26.402353] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:33.961 [2024-07-25 17:04:26.424647] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:34.220 [2024-07-25 17:04:26.447828] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:13:34.220 
Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:13:34.220 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:13:34.220 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:13:34.220 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:13:34.220 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:13:34.220 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:13:34.220 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:13:34.220 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:13:34.220 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:13:34.220 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:13:34.220 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:13:34.220 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:13:34.220 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:13:34.220 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:13:34.220 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 
00:13:34.220 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful.
[2024-07-25 17:04:26.466739] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.220 [2024-07-25 17:04:26.481455] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.220 [2024-07-25 17:04:26.506426] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.220 [2024-07-25 17:04:26.517133] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.220 [2024-07-25 17:04:26.551925] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.220 [2024-07-25 17:04:26.570622] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.220 [2024-07-25 17:04:26.595225] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.220 [2024-07-25 17:04:26.617920] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.220 [2024-07-25 17:04:26.643245] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.220 [2024-07-25 17:04:26.672360] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.479 [2024-07-25 17:04:26.693454] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.479 [2024-07-25 17:04:26.717605] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.479 [2024-07-25 17:04:26.754664] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.479 [2024-07-25 17:04:26.776866] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.479 [2024-07-25 17:04:26.809993] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.479 [2024-07-25 17:04:26.835282] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful.
00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful.
00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful.
00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful.
00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful.
00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful.
00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful.
00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful.
00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful.
00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful.
00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful.
00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful.
00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful.
00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:13:34.479 Login to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 00:13:34.479 17:04:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@74 -- # waitforiscsidevices 30 00:13:34.479 17:04:26 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@116 -- # local num=30 00:13:34.479 17:04:26 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:13:34.479 17:04:26 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:13:34.479 17:04:26 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:13:34.479 17:04:26 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:13:34.479 [2024-07-25 17:04:26.857473] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:34.479 17:04:26 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # n=30 00:13:34.479 17:04:26 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@120 -- # '[' 30 -ne 30 ']' 00:13:34.479 17:04:26 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@123 -- # return 0 00:13:34.479 Running FIO 00:13:34.479 17:04:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@76 -- # echo 'Running FIO' 00:13:34.479 17:04:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 64 -t randrw -r 5 00:13:34.738 [global] 00:13:34.738 thread=1 00:13:34.738 invalidate=1 00:13:34.738 rw=randrw 00:13:34.738 time_based=1 00:13:34.738 runtime=5 00:13:34.738 ioengine=libaio 00:13:34.738 direct=1 00:13:34.738 bs=131072 00:13:34.738 iodepth=64 00:13:34.738 norandommap=1 00:13:34.738 numjobs=1 00:13:34.738 00:13:34.738 [job0] 00:13:34.738 filename=/dev/sda 00:13:34.738 [job1] 00:13:34.738 filename=/dev/sdb 00:13:34.738 [job2] 00:13:34.738 filename=/dev/sdc 00:13:34.738 [job3] 00:13:34.738 filename=/dev/sdd 00:13:34.738 [job4] 00:13:34.738 filename=/dev/sde 00:13:34.738 [job5] 00:13:34.738 filename=/dev/sdf 00:13:34.738 [job6] 00:13:34.738 filename=/dev/sdg 00:13:34.738 [job7] 00:13:34.738 filename=/dev/sdh 00:13:34.738 [job8] 00:13:34.738 filename=/dev/sdi 00:13:34.738 [job9] 00:13:34.738 filename=/dev/sdj 00:13:34.738 [job10] 00:13:34.738 filename=/dev/sdk 00:13:34.738 [job11] 00:13:34.738 filename=/dev/sdl 00:13:34.738 [job12] 00:13:34.738 filename=/dev/sdm 00:13:34.738 [job13] 00:13:34.738 filename=/dev/sdn 00:13:34.738 [job14] 00:13:34.738 filename=/dev/sdo 00:13:34.738 [job15] 00:13:34.738 filename=/dev/sdp 00:13:34.738 [job16] 00:13:34.738 filename=/dev/sdq 00:13:34.738 [job17] 00:13:34.738 filename=/dev/sdr 00:13:34.738 [job18] 00:13:34.738 filename=/dev/sds 00:13:34.738 [job19] 00:13:34.738 filename=/dev/sdt 00:13:34.738 [job20] 00:13:34.738 filename=/dev/sdu 00:13:34.738 [job21] 00:13:34.738 filename=/dev/sdv 00:13:34.738 [job22] 00:13:34.738 filename=/dev/sdw 00:13:34.738 [job23] 00:13:34.738 filename=/dev/sdx 00:13:34.738 [job24] 00:13:34.738 filename=/dev/sdy 00:13:34.738 [job25] 00:13:34.738 filename=/dev/sdz 00:13:34.738 [job26] 00:13:34.738 filename=/dev/sdaa 00:13:34.738 [job27] 00:13:34.738 filename=/dev/sdab 00:13:34.738 [job28] 
00:13:34.738 filename=/dev/sdac 00:13:34.738 [job29] 00:13:34.738 filename=/dev/sdad 00:13:35.306 queue_depth set to 113 (sda) 00:13:35.306 queue_depth set to 113 (sdb) 00:13:35.306 queue_depth set to 113 (sdc) 00:13:35.306 queue_depth set to 113 (sdd) 00:13:35.565 queue_depth set to 113 (sde) 00:13:35.565 queue_depth set to 113 (sdf) 00:13:35.565 queue_depth set to 113 (sdg) 00:13:35.565 queue_depth set to 113 (sdh) 00:13:35.565 queue_depth set to 113 (sdi) 00:13:35.565 queue_depth set to 113 (sdj) 00:13:35.565 queue_depth set to 113 (sdk) 00:13:35.565 queue_depth set to 113 (sdl) 00:13:35.565 queue_depth set to 113 (sdm) 00:13:35.565 queue_depth set to 113 (sdn) 00:13:35.565 queue_depth set to 113 (sdo) 00:13:35.565 queue_depth set to 113 (sdp) 00:13:35.565 queue_depth set to 113 (sdq) 00:13:35.823 queue_depth set to 113 (sdr) 00:13:35.823 queue_depth set to 113 (sds) 00:13:35.823 queue_depth set to 113 (sdt) 00:13:35.823 queue_depth set to 113 (sdu) 00:13:35.823 queue_depth set to 113 (sdv) 00:13:35.823 queue_depth set to 113 (sdw) 00:13:35.823 queue_depth set to 113 (sdx) 00:13:35.823 queue_depth set to 113 (sdy) 00:13:35.823 queue_depth set to 113 (sdz) 00:13:35.823 queue_depth set to 113 (sdaa) 00:13:35.823 queue_depth set to 113 (sdab) 00:13:35.823 queue_depth set to 113 (sdac) 00:13:36.082 queue_depth set to 113 (sdad) 00:13:36.082 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job16: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:13:36.082 job17: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:13:36.082 job18: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:13:36.082 job19: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:13:36.082 job20: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:13:36.082 job21: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:13:36.082 job22: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:13:36.082 job23: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:13:36.082 job24: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:13:36.082 job25: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:13:36.082 job26: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:13:36.082 job27: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:13:36.082 job28: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:13:36.082 job29: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:13:36.082 fio-3.35
00:13:36.082 Starting 30 threads
00:13:36.082 [2024-07-25 17:04:28.509601] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.082 [2024-07-25 17:04:28.513641] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.082 [2024-07-25 17:04:28.517552] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.082 [2024-07-25 17:04:28.521399] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.082 [2024-07-25 17:04:28.524526] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.082 [2024-07-25 17:04:28.527733] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.082 [2024-07-25 17:04:28.530787] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.082 [2024-07-25 17:04:28.533393] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.082 [2024-07-25 17:04:28.535911] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.082 [2024-07-25 17:04:28.538385] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.082 [2024-07-25 17:04:28.540967] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.082 [2024-07-25 17:04:28.543094] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.082 [2024-07-25 17:04:28.545128] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.082 [2024-07-25 17:04:28.547106] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.082 [2024-07-25 17:04:28.549206] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.551228] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.553101] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.554776] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.556518] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.558210] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.559868] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.561502] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.563115] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.564606] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.566119] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.567612] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.569164] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.570645] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.572124] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:36.341 [2024-07-25 17:04:28.573604] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.490968] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.498422] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.501666] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.504399] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.507689] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.509786] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.512097] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.514812] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.516978] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.519073] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.520773] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.522829] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.524776] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.526476] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.918 [2024-07-25 17:04:34.528160] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.919 [2024-07-25 17:04:34.530014] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.919 [2024-07-25 17:04:34.531879] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.919 [2024-07-25 17:04:34.533569] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.919 [2024-07-25 17:04:34.538036] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.919 [2024-07-25 17:04:34.539866] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.919 [2024-07-25 17:04:34.541662] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.919 [2024-07-25 17:04:34.543426] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.919 [2024-07-25 17:04:34.545064] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.919 [2024-07-25 17:04:34.546794] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.919 [2024-07-25 17:04:34.548739] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.919
00:13:42.919 job0: (groupid=0, jobs=1): err= 0: pid=70699: Thu Jul 25 17:04:34 2024
00:13:42.919 read: IOPS=97, BW=12.2MiB/s (12.8MB/s)(65.1MiB/5346msec)
00:13:42.919 slat (usec): min=8, max=601, avg=40.90, stdev=38.82
00:13:42.919 clat (msec): min=26, max=363, avg=44.93, stdev=29.99
00:13:42.919 lat (msec): min=26, max=363, avg=44.97, stdev=29.99
00:13:42.919 clat percentiles (msec):
00:13:42.919 | 1.00th=[ 29], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.919 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.919 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 54], 95.00th=[ 87],
00:13:42.919 | 99.00th=[ 150], 99.50th=[ 355], 99.90th=[ 363], 99.95th=[ 363],
00:13:42.919 | 99.99th=[ 363]
00:13:42.919 bw ( KiB/s): min= 9472, max=21248, per=3.35%, avg=13255.30, stdev=3341.47, samples=10
00:13:42.919 iops : min= 74, max= 166, avg=103.40, stdev=26.11, samples=10
00:13:42.919 write: IOPS=104, BW=13.1MiB/s (13.7MB/s)(69.9MiB/5346msec); 0 zone resets
00:13:42.919 slat (usec): min=13, max=1115, avg=54.98, stdev=65.07
00:13:42.919 clat (msec): min=149, max=894, avg=569.30, stdev=80.34
00:13:42.919 lat (msec): min=149, max=894, avg=569.36, stdev=80.34
00:13:42.919 clat percentiles (msec):
00:13:42.919 | 1.00th=[ 239], 5.00th=[ 422], 10.00th=[ 523], 20.00th=[ 558],
00:13:42.919 | 30.00th=[ 567], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 584],
00:13:42.919 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 609], 95.00th=[ 625],
00:13:42.919 | 99.00th=[ 844], 99.50th=[ 877], 99.90th=[ 894], 99.95th=[ 894],
00:13:42.919 | 99.99th=[ 894]
00:13:42.919 bw ( KiB/s): min= 7168, max=13824, per=3.20%, avg=12768.80, stdev=1989.50, samples=10
00:13:42.919 iops : min= 56, max= 108, avg=99.60, stdev=15.48, samples=10
00:13:42.919 lat (msec) : 50=43.33%, 100=3.06%, 250=2.13%, 500=3.89%, 750=46.02%
00:13:42.919 lat (msec) : 1000=1.57%
00:13:42.919 cpu : usr=0.36%, sys=0.80%, ctx=624, majf=0, minf=1
00:13:42.919 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.2%
00:13:42.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.919 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.919 issued rwts: total=521,559,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.919 latency : target=0, window=0, percentile=100.00%, depth=64
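[Editor's note] The job headers above only show each job's I/O shape (randrw, 128 KiB blocks, libaio, queue depth 64, one thread per job). As a rough illustration, a job file producing headers like these could be generated as sketched below. This is a minimal sketch, not the file the test harness actually used; the output file name and the /dev/sdX paths are placeholders for the iSCSI LUNs the initiator exposes:

  cat > randrw-128k.fio <<'EOF'
  ; hypothetical job file matching the headers above
  [global]
  rw=randrw
  bs=128k
  ioengine=libaio
  iodepth=64
  direct=1
  ; "Starting 30 threads" in the log implies fio's thread mode
  thread=1

  [job0]
  filename=/dev/sda

  [job1]
  filename=/dev/sdb
  ; ... one numbered section per LUN, through job29
  EOF
  fio randrw-128k.fio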
00:13:42.919 job1: (groupid=0, jobs=1): err= 0: pid=70700: Thu Jul 25 17:04:34 2024
00:13:42.919 read: IOPS=95, BW=11.9MiB/s (12.5MB/s)(63.6MiB/5346msec)
00:13:42.919 slat (usec): min=8, max=1459, avg=51.63, stdev=113.09
00:13:42.919 clat (msec): min=25, max=375, avg=46.42, stdev=31.95
00:13:42.919 lat (msec): min=25, max=376, avg=46.47, stdev=31.95
00:13:42.919 clat percentiles (msec):
00:13:42.919 | 1.00th=[ 29], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 37],
00:13:42.919 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.919 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 69], 95.00th=[ 103],
00:13:42.919 | 99.00th=[ 144], 99.50th=[ 347], 99.90th=[ 376], 99.95th=[ 376],
00:13:42.919 | 99.99th=[ 376]
00:13:42.919 bw ( KiB/s): min= 8448, max=18944, per=3.26%, avg=12923.30, stdev=3468.01, samples=10
00:13:42.919 iops : min= 66, max= 148, avg=100.80, stdev=27.18, samples=10
00:13:42.919 write: IOPS=104, BW=13.0MiB/s (13.7MB/s)(69.8MiB/5346msec); 0 zone resets
00:13:42.919 slat (usec): min=14, max=1456, avg=68.31, stdev=147.48
00:13:42.919 clat (msec): min=153, max=897, avg=570.00, stdev=80.13
00:13:42.919 lat (msec): min=153, max=897, avg=570.07, stdev=80.13
00:13:42.919 clat percentiles (msec):
00:13:42.919 | 1.00th=[ 257], 5.00th=[ 422], 10.00th=[ 531], 20.00th=[ 558],
00:13:42.919 | 30.00th=[ 567], 40.00th=[ 575], 50.00th=[ 575], 60.00th=[ 584],
00:13:42.919 | 70.00th=[ 592], 80.00th=[ 592], 90.00th=[ 609], 95.00th=[ 634],
00:13:42.919 | 99.00th=[ 844], 99.50th=[ 877], 99.90th=[ 902], 99.95th=[ 902],
00:13:42.919 | 99.99th=[ 902]
00:13:42.919 bw ( KiB/s): min= 6912, max=13824, per=3.19%, avg=12743.20, stdev=2069.61, samples=10
00:13:42.919 iops : min= 54, max= 108, avg=99.40, stdev=16.11, samples=10
00:13:42.919 lat (msec) : 50=42.27%, 100=3.00%, 250=2.62%, 500=3.94%, 750=46.39%
00:13:42.919 lat (msec) : 1000=1.78%
00:13:42.919 cpu : usr=0.36%, sys=0.73%, ctx=642, majf=0, minf=1
00:13:42.919 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.1%
00:13:42.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.919 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.919 issued rwts: total=509,558,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.919 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.919 job2: (groupid=0, jobs=1): err= 0: pid=70702: Thu Jul 25 17:04:34 2024
00:13:42.919 read: IOPS=105, BW=13.2MiB/s (13.9MB/s)(70.8MiB/5349msec)
00:13:42.919 slat (usec): min=6, max=1149, avg=39.97, stdev=86.77
00:13:42.919 clat (msec): min=26, max=350, avg=44.04, stdev=24.73
00:13:42.919 lat (msec): min=26, max=350, avg=44.08, stdev=24.73
00:13:42.919 clat percentiles (msec):
00:13:42.919 | 1.00th=[ 28], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.919 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.919 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 45], 95.00th=[ 84],
00:13:42.919 | 99.00th=[ 161], 99.50th=[ 174], 99.90th=[ 351], 99.95th=[ 351],
00:13:42.919 | 99.99th=[ 351]
00:13:42.919 bw ( KiB/s): min=12263, max=17664, per=3.65%, avg=14458.80, stdev=1747.43, samples=10
00:13:42.919 iops : min= 95, max= 138, avg=112.80, stdev=13.84, samples=10
00:13:42.919 write: IOPS=104, BW=13.1MiB/s (13.7MB/s)(70.1MiB/5349msec); 0 zone resets
00:13:42.919 slat (usec): min=9, max=1127, avg=46.36, stdev=78.95
00:13:42.919 clat (msec): min=156, max=910, avg=564.94, stdev=77.04
00:13:42.919 lat (msec): min=157, max=910, avg=564.99, stdev=77.04
00:13:42.919 clat percentiles (msec):
00:13:42.919 | 1.00th=[ 257], 5.00th=[ 418], 10.00th=[ 523], 20.00th=[ 550],
00:13:42.919 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 575],
00:13:42.919 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 600], 95.00th=[ 642],
00:13:42.919 | 99.00th=[ 810], 99.50th=[ 844], 99.90th=[ 911], 99.95th=[ 911],
00:13:42.919 | 99.99th=[ 911]
00:13:42.919 bw ( KiB/s): min= 6912, max=13824, per=3.19%, avg=12743.20, stdev=2069.61, samples=10
00:13:42.919 iops : min= 54, max= 108, avg=99.40, stdev=16.11, samples=10
00:13:42.919 lat (msec) : 50=46.05%, 100=2.04%, 250=2.48%, 500=3.55%, 750=44.45%
00:13:42.919 lat (msec) : 1000=1.42%
00:13:42.919 cpu : usr=0.21%, sys=0.75%, ctx=656, majf=0, minf=1
00:13:42.919 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4%
00:13:42.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.919 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.919 issued rwts: total=566,561,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.919 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.919 job3: (groupid=0, jobs=1): err= 0: pid=70724: Thu Jul 25 17:04:34 2024
00:13:42.919 read: IOPS=99, BW=12.4MiB/s (13.0MB/s)(66.1MiB/5338msec)
00:13:42.919 slat (usec): min=11, max=1329, avg=44.56, stdev=72.47
00:13:42.919 clat (msec): min=27, max=362, avg=46.80, stdev=35.69
00:13:42.919 lat (msec): min=27, max=362, avg=46.85, stdev=35.69
00:13:42.919 clat percentiles (msec):
00:13:42.919 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 37],
00:13:42.919 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40],
00:13:42.919 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 61], 95.00th=[ 103],
00:13:42.919 | 99.00th=[ 148], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363],
00:13:42.919 | 99.99th=[ 363]
00:13:42.919 bw ( KiB/s): min= 8686, max=20480, per=3.39%, avg=13409.80, stdev=3261.55, samples=10
00:13:42.919 iops : min= 67, max= 160, avg=104.60, stdev=25.62, samples=10
00:13:42.919 write: IOPS=104, BW=13.0MiB/s (13.7MB/s)(69.5MiB/5338msec); 0 zone resets
00:13:42.919 slat (usec): min=15, max=1165, avg=55.88, stdev=78.41
00:13:42.919 clat (msec): min=152, max=922, avg=569.11, stdev=80.92
00:13:42.919 lat (msec): min=152, max=922, avg=569.17, stdev=80.92
00:13:42.919 clat percentiles (msec):
00:13:42.919 | 1.00th=[ 245], 5.00th=[ 422], 10.00th=[ 535], 20.00th=[ 550],
00:13:42.919 | 30.00th=[ 567], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 584],
00:13:42.919 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 609], 95.00th=[ 651],
00:13:42.919 | 99.00th=[ 869], 99.50th=[ 885], 99.90th=[ 919], 99.95th=[ 919],
00:13:42.919 | 99.99th=[ 919]
00:13:42.919 bw ( KiB/s): min= 7168, max=13796, per=3.19%, avg=12743.20, stdev=1982.97, samples=10
00:13:42.919 iops : min= 56, max= 107, avg=99.40, stdev=15.41, samples=10
00:13:42.919 lat (msec) : 50=43.32%, 100=2.95%, 250=2.58%, 500=3.87%, 750=45.71%
00:13:42.919 lat (msec) : 1000=1.57%
00:13:42.919 cpu : usr=0.30%, sys=0.73%, ctx=632, majf=0, minf=1
00:13:42.919 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2%
00:13:42.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.919 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.919 issued rwts: total=529,556,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.919 latency : target=0, window=0, percentile=100.00%, depth=64
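[Editor's note] fio prints every bandwidth figure twice: binary units (MiB/s) and, in parentheses, decimal units (MB/s); the two values in each read/write line are the same measurement. Taking job0's read line as a worked example, 12.2 MiB/s x 1,048,576 / 1,000,000 ~= 12.79, which fio rounds to the reported 12.8 MB/s, and IOPS x block size gives roughly the same figure (97 x 128 KiB / 1024 ~= 12.1 MiB/s; fio computes bandwidth from bytes moved, so rounding differs slightly). A quick check of that arithmetic, not part of the test harness:

  awk 'BEGIN {
    # job0 read line: IOPS=97, BW=12.2MiB/s (12.8MB/s), bs=128KiB
    printf "IOPS x bs : %.3f MiB/s\n", 97 * 128 / 1024
    printf "decimal   : %.2f MB/s\n", 12.2 * 1048576 / 1000000
  }'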
00:13:42.919 job4: (groupid=0, jobs=1): err= 0: pid=70747: Thu Jul 25 17:04:34 2024
00:13:42.919 read: IOPS=104, BW=13.1MiB/s (13.7MB/s)(69.8MiB/5330msec)
00:13:42.920 slat (usec): min=6, max=1308, avg=44.68, stdev=84.01
00:13:42.920 clat (msec): min=26, max=332, avg=46.43, stdev=26.76
00:13:42.920 lat (msec): min=26, max=332, avg=46.47, stdev=26.75
00:13:42.920 clat percentiles (msec):
00:13:42.920 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37],
00:13:42.920 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40],
00:13:42.920 | 70.00th=[ 41], 80.00th=[ 42], 90.00th=[ 72], 95.00th=[ 101],
00:13:42.920 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 334], 99.95th=[ 334],
00:13:42.920 | 99.99th=[ 334]
00:13:42.920 bw ( KiB/s): min=10752, max=26112, per=3.59%, avg=14210.40, stdev=4449.71, samples=10
00:13:42.920 iops : min= 84, max= 204, avg=111.00, stdev=34.77, samples=10
00:13:42.920 write: IOPS=105, BW=13.2MiB/s (13.8MB/s)(70.1MiB/5330msec); 0 zone resets
00:13:42.920 slat (usec): min=8, max=784, avg=50.39, stdev=68.51
00:13:42.920 clat (msec): min=137, max=890, avg=561.03, stdev=83.92
00:13:42.920 lat (msec): min=137, max=890, avg=561.09, stdev=83.93
00:13:42.920 clat percentiles (msec):
00:13:42.920 | 1.00th=[ 243], 5.00th=[ 401], 10.00th=[ 485], 20.00th=[ 550],
00:13:42.920 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 584],
00:13:42.920 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 600], 95.00th=[ 617],
00:13:42.920 | 99.00th=[ 835], 99.50th=[ 885], 99.90th=[ 894], 99.95th=[ 894],
00:13:42.920 | 99.99th=[ 894]
00:13:42.920 bw ( KiB/s): min= 7424, max=13851, per=3.21%, avg=12802.70, stdev=1913.55, samples=10
00:13:42.920 iops : min= 58, max= 108, avg=100.00, stdev=14.94, samples=10
00:13:42.920 lat (msec) : 50=43.34%, 100=3.93%, 250=3.04%, 500=5.45%, 750=42.81%
00:13:42.920 lat (msec) : 1000=1.43%
00:13:42.920 cpu : usr=0.41%, sys=0.75%, ctx=664, majf=0, minf=1
00:13:42.920 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4%
00:13:42.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.920 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.920 issued rwts: total=558,561,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.920 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.920 job5: (groupid=0, jobs=1): err= 0: pid=70748: Thu Jul 25 17:04:34 2024
00:13:42.920 read: IOPS=104, BW=13.1MiB/s (13.8MB/s)(70.4MiB/5362msec)
00:13:42.920 slat (usec): min=7, max=971, avg=44.37, stdev=66.44
00:13:42.920 clat (msec): min=9, max=378, avg=43.67, stdev=28.13
00:13:42.920 lat (msec): min=9, max=378, avg=43.71, stdev=28.13
00:13:42.920 clat percentiles (msec):
00:13:42.920 | 1.00th=[ 21], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.920 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.920 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 44], 95.00th=[ 75],
00:13:42.920 | 99.00th=[ 174], 99.50th=[ 190], 99.90th=[ 380], 99.95th=[ 380],
00:13:42.920 | 99.99th=[ 380]
00:13:42.920 bw ( KiB/s): min= 9984, max=19968, per=3.63%, avg=14364.50, stdev=3367.73, samples=10
00:13:42.920 iops : min= 78, max= 156, avg=112.10, stdev=26.29, samples=10
00:13:42.920 write: IOPS=103, BW=13.0MiB/s (13.6MB/s)(69.6MiB/5362msec); 0 zone resets
00:13:42.920 slat (usec): min=13, max=4663, avg=63.49, stdev=212.59
00:13:42.920 clat (msec): min=148, max=929, avg=570.55, stdev=83.19
00:13:42.920 lat (msec): min=153, max=929, avg=570.61, stdev=83.16
00:13:42.920 clat percentiles (msec):
00:13:42.920 | 1.00th=[ 257], 5.00th=[ 439], 10.00th=[ 542], 20.00th=[ 558],
00:13:42.920 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 575],
00:13:42.920 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 609], 95.00th=[ 693],
00:13:42.920 | 99.00th=[ 885], 99.50th=[ 919], 99.90th=[ 927], 99.95th=[ 927],
00:13:42.920 | 99.99th=[ 927]
00:13:42.920 bw ( KiB/s): min= 6669, max=13796, per=3.18%, avg=12698.80, stdev=2138.08, samples=10
00:13:42.920 iops : min= 52, max= 107, avg=99.10, stdev=16.68, samples=10
00:13:42.920 lat (msec) : 10=0.18%, 20=0.18%, 50=45.80%, 100=2.68%, 250=1.70%
00:13:42.920 lat (msec) : 500=3.57%, 750=44.20%, 1000=1.70%
00:13:42.920 cpu : usr=0.26%, sys=0.84%, ctx=645, majf=0, minf=1
00:13:42.920 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4%
00:13:42.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.920 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.920 issued rwts: total=563,557,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.920 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.920 job6: (groupid=0, jobs=1): err= 0: pid=70749: Thu Jul 25 17:04:34 2024
00:13:42.920 read: IOPS=102, BW=12.8MiB/s (13.4MB/s)(68.5MiB/5358msec)
00:13:42.920 slat (usec): min=7, max=854, avg=39.86, stdev=61.30
00:13:42.920 clat (msec): min=10, max=394, avg=43.94, stdev=30.96
00:13:42.920 lat (msec): min=10, max=394, avg=43.98, stdev=30.96
00:13:42.920 clat percentiles (msec):
00:13:42.920 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.920 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.920 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 45], 95.00th=[ 79],
00:13:42.920 | 99.00th=[ 165], 99.50th=[ 376], 99.90th=[ 397], 99.95th=[ 397],
00:13:42.920 | 99.99th=[ 397]
00:13:42.920 bw ( KiB/s): min= 9216, max=18725, per=3.52%, avg=13955.30, stdev=2622.81, samples=10
00:13:42.920 iops : min= 72, max= 146, avg=108.90, stdev=20.44, samples=10
00:13:42.920 write: IOPS=103, BW=13.0MiB/s (13.6MB/s)(69.5MiB/5358msec); 0 zone resets
00:13:42.920 slat (usec): min=18, max=5988, avg=57.01, stdev=260.25
00:13:42.920 clat (msec): min=126, max=916, avg=571.39, stdev=84.58
00:13:42.920 lat (msec): min=132, max=916, avg=571.45, stdev=84.53
00:13:42.920 clat percentiles (msec):
00:13:42.920 | 1.00th=[ 247], 5.00th=[ 414], 10.00th=[ 535], 20.00th=[ 550],
00:13:42.920 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 584],
00:13:42.920 | 70.00th=[ 592], 80.00th=[ 600], 90.00th=[ 609], 95.00th=[ 667],
00:13:42.920 | 99.00th=[ 877], 99.50th=[ 902], 99.90th=[ 919], 99.95th=[ 919],
00:13:42.920 | 99.99th=[ 919]
00:13:42.920 bw ( KiB/s): min= 6669, max=13796, per=3.18%, avg=12698.80, stdev=2138.08, samples=10
00:13:42.920 iops : min= 52, max= 107, avg=99.10, stdev=16.68, samples=10
00:13:42.920 lat (msec) : 20=0.36%, 50=45.20%, 100=2.63%, 250=1.72%, 500=3.53%
00:13:42.920 lat (msec) : 750=44.93%, 1000=1.63%
00:13:42.920 cpu : usr=0.26%, sys=0.71%, ctx=683, majf=0, minf=1
00:13:42.920 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3%
00:13:42.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.920 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.920 issued rwts: total=548,556,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.920 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.920 job7: (groupid=0, jobs=1): err= 0: pid=70761: Thu Jul 25 17:04:34 2024
00:13:42.920 read: IOPS=93, BW=11.6MiB/s (12.2MB/s)(62.2MiB/5349msec)
00:13:42.920 slat (usec): min=6, max=537, avg=37.91, stdev=52.86
00:13:42.920 clat (msec): min=21, max=372, avg=45.89, stdev=31.30
00:13:42.920 lat (msec): min=21, max=372, avg=45.93, stdev=31.30
00:13:42.920 clat percentiles (msec):
00:13:42.920 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37],
00:13:42.920 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40],
00:13:42.920 | 70.00th=[ 40], 80.00th=[ 42], 90.00th=[ 65], 95.00th=[ 99],
00:13:42.920 | 99.00th=[ 146], 99.50th=[ 363], 99.90th=[ 372], 99.95th=[ 372],
00:13:42.920 | 99.99th=[ 372]
00:13:42.920 bw ( KiB/s): min= 7936, max=18944, per=3.20%, avg=12667.40, stdev=3293.74, samples=10
00:13:42.920 iops : min= 62, max= 148, avg=98.80, stdev=25.81, samples=10
00:13:42.920 write: IOPS=104, BW=13.1MiB/s (13.7MB/s)(69.9MiB/5349msec); 0 zone resets
00:13:42.920 slat (usec): min=9, max=806, avg=45.99, stdev=61.05
00:13:42.920 clat (msec): min=151, max=885, avg=570.67, stdev=80.51
00:13:42.920 lat (msec): min=151, max=885, avg=570.71, stdev=80.52
00:13:42.920 clat percentiles (msec):
00:13:42.920 | 1.00th=[ 243], 5.00th=[ 418], 10.00th=[ 535], 20.00th=[ 558],
00:13:42.920 | 30.00th=[ 567], 40.00th=[ 575], 50.00th=[ 575], 60.00th=[ 584],
00:13:42.920 | 70.00th=[ 592], 80.00th=[ 592], 90.00th=[ 609], 95.00th=[ 676],
00:13:42.920 | 99.00th=[ 852], 99.50th=[ 877], 99.90th=[ 885], 99.95th=[ 885],
00:13:42.920 | 99.99th=[ 885]
00:13:42.920 bw ( KiB/s): min= 7168, max=13796, per=3.20%, avg=12768.80, stdev=1992.76, samples=10
00:13:42.920 iops : min= 56, max= 107, avg=99.60, stdev=15.49, samples=10
00:13:42.920 lat (msec) : 50=41.72%, 100=3.12%, 250=2.55%, 500=3.69%, 750=47.49%
00:13:42.920 lat (msec) : 1000=1.42%
00:13:42.920 cpu : usr=0.22%, sys=0.73%, ctx=670, majf=0, minf=1
00:13:42.920 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0%
00:13:42.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.920 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.920 issued rwts: total=498,559,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.920 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.920 job8: (groupid=0, jobs=1): err= 0: pid=70779: Thu Jul 25 17:04:34 2024
00:13:42.920 read: IOPS=95, BW=11.9MiB/s (12.5MB/s)(63.9MiB/5360msec)
00:13:42.920 slat (usec): min=10, max=651, avg=41.41, stdev=40.88
00:13:42.920 clat (msec): min=7, max=385, avg=48.62, stdev=42.99
00:13:42.920 lat (msec): min=7, max=385, avg=48.66, stdev=42.99
00:13:42.920 clat percentiles (msec):
00:13:42.920 | 1.00th=[ 8], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.920 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.920 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 54], 95.00th=[ 165],
00:13:42.920 | 99.00th=[ 182], 99.50th=[ 376], 99.90th=[ 384], 99.95th=[ 384],
00:13:42.920 | 99.99th=[ 384]
00:13:42.920 bw ( KiB/s): min= 8704, max=18688, per=3.28%, avg=12978.80, stdev=2575.79, samples=10
00:13:42.920 iops : min= 68, max= 146, avg=101.30, stdev=20.16, samples=10
00:13:42.920 write: IOPS=104, BW=13.0MiB/s (13.6MB/s)(69.8MiB/5360msec); 0 zone resets
00:13:42.920 slat (usec): min=13, max=6095, avg=64.47, stdev=261.68
00:13:42.920 clat (msec): min=27, max=917, avg=568.96, stdev=86.74
00:13:42.920 lat (msec): min=27, max=917, avg=569.02, stdev=86.70
00:13:42.920 clat percentiles (msec):
00:13:42.920 | 1.00th=[ 199], 5.00th=[ 435], 10.00th=[ 518], 20.00th=[ 558],
00:13:42.920 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 575],
00:13:42.920 | 70.00th=[ 584], 80.00th=[ 600], 90.00th=[ 617], 95.00th=[ 642],
00:13:42.920 | 99.00th=[ 869], 99.50th=[ 902], 99.90th=[ 919], 99.95th=[ 919],
00:13:42.920 | 99.99th=[ 919]
00:13:42.920 bw ( KiB/s): min= 7168, max=13796, per=3.20%, avg=12774.30, stdev=1991.54, samples=10
00:13:42.920 iops : min= 56, max= 107, avg=99.70, stdev=15.51, samples=10
00:13:42.921 lat (msec) : 10=0.75%, 20=0.56%, 50=41.63%, 100=1.96%, 250=3.18%
00:13:42.921 lat (msec) : 500=4.49%, 750=45.84%, 1000=1.59%
00:13:42.921 cpu : usr=0.26%, sys=0.78%, ctx=676, majf=0, minf=1
00:13:42.921 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.1%
00:13:42.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.921 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.921 issued rwts: total=511,558,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.921 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.921 job9: (groupid=0, jobs=1): err= 0: pid=70844: Thu Jul 25 17:04:34 2024
00:13:42.921 read: IOPS=106, BW=13.3MiB/s (13.9MB/s)(71.2MiB/5356msec)
00:13:42.921 slat (usec): min=10, max=754, avg=32.51, stdev=44.19
00:13:42.921 clat (msec): min=7, max=382, avg=46.45, stdev=33.89
00:13:42.921 lat (msec): min=7, max=382, avg=46.48, stdev=33.89
00:13:42.921 clat percentiles (msec):
00:13:42.921 | 1.00th=[ 17], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.921 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.921 | 70.00th=[ 40], 80.00th=[ 42], 90.00th=[ 59], 95.00th=[ 96],
00:13:42.921 | 99.00th=[ 167], 99.50th=[ 359], 99.90th=[ 384], 99.95th=[ 384],
00:13:42.921 | 99.99th=[ 384]
00:13:42.921 bw ( KiB/s): min=10986, max=31038, per=3.67%, avg=14522.10, stdev=5976.97, samples=10
00:13:42.921 iops : min= 85, max= 242, avg=113.30, stdev=46.60, samples=10
00:13:42.921 write: IOPS=103, BW=13.0MiB/s (13.6MB/s)(69.5MiB/5356msec); 0 zone resets
00:13:42.921 slat (usec): min=12, max=3425, avg=47.54, stdev=150.90
00:13:42.921 clat (msec): min=103, max=927, avg=567.63, stdev=85.58
00:13:42.921 lat (msec): min=106, max=927, avg=567.68, stdev=85.55
00:13:42.921 clat percentiles (msec):
00:13:42.921 | 1.00th=[ 249], 5.00th=[ 422], 10.00th=[ 510], 20.00th=[ 550],
00:13:42.921 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 575],
00:13:42.921 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 609], 95.00th=[ 676],
00:13:42.921 | 99.00th=[ 877], 99.50th=[ 911], 99.90th=[ 927], 99.95th=[ 927],
00:13:42.921 | 99.99th=[ 927]
00:13:42.921 bw ( KiB/s): min= 6669, max=13824, per=3.18%, avg=12698.80, stdev=2141.48, samples=10
00:13:42.921 iops : min= 52, max= 108, avg=99.10, stdev=16.71, samples=10
00:13:42.921 lat (msec) : 10=0.18%, 20=0.53%, 50=44.05%, 100=3.55%, 250=2.58%
00:13:42.921 lat (msec) : 500=4.17%, 750=43.25%, 1000=1.69%
00:13:42.921 cpu : usr=0.28%, sys=0.63%, ctx=677, majf=0, minf=1
00:13:42.921 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4%
00:13:42.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.921 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.921 issued rwts: total=570,556,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.921 latency : target=0, window=0, percentile=100.00%, depth=64
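[Editor's note] With one multi-line stats block per job, the summary lines are easy to lose in the noise. An illustrative way to pull just the per-job throughput and completion-status lines out of a saved copy of this console output (the file name build.log is a placeholder, not a file the pipeline produces):

  # per-job read/write throughput summaries
  grep -E '(read|write): IOPS=' build.log
  # per-job completion status (err= 0 means the job finished cleanly)
  grep -E 'job[0-9]+: \(groupid=' build.log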
00:13:42.921 job10: (groupid=0, jobs=1): err= 0: pid=70889: Thu Jul 25 17:04:34 2024
00:13:42.921 read: IOPS=108, BW=13.5MiB/s (14.2MB/s)(72.1MiB/5334msec)
00:13:42.921 slat (usec): min=7, max=685, avg=33.78, stdev=39.46
00:13:42.921 clat (msec): min=26, max=357, avg=46.29, stdev=32.73
00:13:42.921 lat (msec): min=26, max=357, avg=46.33, stdev=32.72
00:13:42.921 clat percentiles (msec):
00:13:42.921 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.921 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40],
00:13:42.921 | 70.00th=[ 40], 80.00th=[ 42], 90.00th=[ 55], 95.00th=[ 108],
00:13:42.921 | 99.00th=[ 140], 99.50th=[ 347], 99.90th=[ 359], 99.95th=[ 359],
00:13:42.921 | 99.99th=[ 359]
00:13:42.921 bw ( KiB/s): min=10475, max=23296, per=3.70%, avg=14661.20, stdev=3938.54, samples=10
00:13:42.921 iops : min= 81, max= 182, avg=114.30, stdev=30.93, samples=10
00:13:42.921 write: IOPS=104, BW=13.1MiB/s (13.7MB/s)(69.9MiB/5334msec); 0 zone resets
00:13:42.921 slat (usec): min=12, max=527, avg=41.47, stdev=39.12
00:13:42.921 clat (msec): min=144, max=889, avg=562.15, stdev=80.95
00:13:42.921 lat (msec): min=144, max=889, avg=562.19, stdev=80.95
00:13:42.921 clat percentiles (msec):
00:13:42.921 | 1.00th=[ 251], 5.00th=[ 405], 10.00th=[ 498], 20.00th=[ 550],
00:13:42.921 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 567], 60.00th=[ 575],
00:13:42.921 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 600], 95.00th=[ 642],
00:13:42.921 | 99.00th=[ 827], 99.50th=[ 877], 99.90th=[ 885], 99.95th=[ 885],
00:13:42.921 | 99.99th=[ 885]
00:13:42.921 bw ( KiB/s): min= 7424, max=13824, per=3.20%, avg=12791.60, stdev=1912.03, samples=10
00:13:42.921 iops : min= 58, max= 108, avg=99.70, stdev=14.84, samples=10
00:13:42.921 lat (msec) : 50=45.60%, 100=2.20%, 250=3.17%, 500=4.84%, 750=42.69%
00:13:42.921 lat (msec) : 1000=1.50%
00:13:42.921 cpu : usr=0.19%, sys=0.79%, ctx=668, majf=0, minf=1
00:13:42.921 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5%
00:13:42.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.921 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.921 issued rwts: total=577,559,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.921 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.921 job11: (groupid=0, jobs=1): err= 0: pid=70908: Thu Jul 25 17:04:34 2024
00:13:42.921 read: IOPS=105, BW=13.2MiB/s (13.9MB/s)(70.8MiB/5347msec)
00:13:42.921 slat (usec): min=7, max=1543, avg=41.38, stdev=82.38
00:13:42.921 clat (msec): min=27, max=373, avg=47.59, stdev=34.50
00:13:42.921 lat (msec): min=27, max=373, avg=47.64, stdev=34.50
00:13:42.921 clat percentiles (msec):
00:13:42.921 | 1.00th=[ 28], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.921 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40],
00:13:42.921 | 70.00th=[ 40], 80.00th=[ 42], 90.00th=[ 71], 95.00th=[ 112],
00:13:42.921 | 99.00th=[ 144], 99.50th=[ 363], 99.90th=[ 372], 99.95th=[ 372],
00:13:42.921 | 99.99th=[ 372]
00:13:42.921 bw ( KiB/s): min= 8704, max=22272, per=3.63%, avg=14381.00, stdev=4914.62, samples=10
00:13:42.921 iops : min= 68, max= 174, avg=112.20, stdev=38.39, samples=10
00:13:42.921 write: IOPS=104, BW=13.0MiB/s (13.7MB/s)(69.8MiB/5347msec); 0 zone resets
00:13:42.921 slat (usec): min=13, max=636, avg=45.13, stdev=32.22
00:13:42.921 clat (msec): min=165, max=913, avg=563.99, stdev=83.22
00:13:42.921 lat (msec): min=165, max=913, avg=564.03, stdev=83.22
00:13:42.921 clat percentiles (msec):
00:13:42.921 | 1.00th=[ 249], 5.00th=[ 414], 10.00th=[ 481], 20.00th=[ 542],
00:13:42.921 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 584],
00:13:42.921 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 600], 95.00th=[ 634],
00:13:42.921 | 99.00th=[ 860], 99.50th=[ 894], 99.90th=[ 911], 99.95th=[ 911],
00:13:42.921 | 99.99th=[ 911]
00:13:42.921 bw ( KiB/s): min= 7168, max=13824, per=3.20%, avg=12768.80, stdev=1989.50, samples=10
00:13:42.921 iops : min= 56, max= 108, avg=99.60, stdev=15.48, samples=10
00:13:42.921 lat (msec) : 50=43.68%, 100=3.20%, 250=3.65%, 500=5.25%, 750=42.79%
00:13:42.921 lat (msec) : 1000=1.42%
00:13:42.921 cpu : usr=0.45%, sys=0.79%, ctx=662, majf=0, minf=1
00:13:42.921 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4%
00:13:42.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.921 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.921 issued rwts: total=566,558,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.921 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.921 job12: (groupid=0, jobs=1): err= 0: pid=70909: Thu Jul 25 17:04:34 2024
00:13:42.921 read: IOPS=109, BW=13.7MiB/s (14.3MB/s)(73.0MiB/5344msec)
00:13:42.921 slat (usec): min=8, max=485, avg=37.38, stdev=42.92
00:13:42.921 clat (msec): min=26, max=365, avg=46.89, stdev=32.07
00:13:42.921 lat (msec): min=26, max=365, avg=46.93, stdev=32.07
00:13:42.921 clat percentiles (msec):
00:13:42.921 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.921 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40],
00:13:42.921 | 70.00th=[ 41], 80.00th=[ 42], 90.00th=[ 64], 95.00th=[ 113],
00:13:42.921 | 99.00th=[ 161], 99.50th=[ 355], 99.90th=[ 368], 99.95th=[ 368],
00:13:42.921 | 99.99th=[ 368]
00:13:42.921 bw ( KiB/s): min= 9472, max=24112, per=3.75%, avg=14847.50, stdev=4213.65, samples=10
00:13:42.921 iops : min= 74, max= 188, avg=115.80, stdev=32.91, samples=10
00:13:42.921 write: IOPS=104, BW=13.1MiB/s (13.7MB/s)(69.8MiB/5344msec); 0 zone resets
00:13:42.921 slat (usec): min=14, max=751, avg=49.08, stdev=65.41
00:13:42.921 clat (msec): min=156, max=896, avg=563.05, stdev=81.73
00:13:42.921 lat (msec): min=156, max=896, avg=563.09, stdev=81.74
00:13:42.921 clat percentiles (msec):
00:13:42.921 | 1.00th=[ 264], 5.00th=[ 435], 10.00th=[ 493], 20.00th=[ 550],
00:13:42.921 | 30.00th=[ 558], 40.00th=[ 558], 50.00th=[ 567], 60.00th=[ 575],
00:13:42.921 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 600], 95.00th=[ 651],
00:13:42.921 | 99.00th=[ 869], 99.50th=[ 885], 99.90th=[ 894], 99.95th=[ 894],
00:13:42.921 | 99.99th=[ 894]
00:13:42.921 bw ( KiB/s): min= 6925, max=13824, per=3.19%, avg=12744.50, stdev=2065.54, samples=10
00:13:42.921 iops : min= 54, max= 108, avg=99.40, stdev=16.11, samples=10
00:13:42.921 lat (msec) : 50=44.83%, 100=3.06%, 250=3.42%, 500=5.08%, 750=41.86%
00:13:42.921 lat (msec) : 1000=1.75%
00:13:42.921 cpu : usr=0.39%, sys=0.58%, ctx=844, majf=0, minf=1
00:13:42.921 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5%
00:13:42.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.921 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.921 issued rwts: total=584,558,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.921 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.921 job13: (groupid=0, jobs=1): err= 0: pid=70910: Thu Jul 25 17:04:34 2024
00:13:42.921 read: IOPS=105, BW=13.2MiB/s (13.8MB/s)(70.2MiB/5333msec)
00:13:42.921 slat (usec): min=6, max=1190, avg=58.75, stdev=122.72
00:13:42.921 clat (msec): min=26, max=362, avg=46.04, stdev=28.16
00:13:42.921 lat (msec): min=26, max=362, avg=46.10, stdev=28.15
00:13:42.921 clat percentiles (msec):
00:13:42.921 | 1.00th=[ 29], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.921 | 30.00th=[ 37], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40],
00:13:42.921 | 70.00th=[ 40], 80.00th=[ 42], 90.00th=[ 65], 95.00th=[ 106],
00:13:42.921 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 363], 99.95th=[ 363],
00:13:42.921 | 99.99th=[ 363]
00:13:42.921 bw ( KiB/s): min= 9984, max=23296, per=3.62%, avg=14324.50, stdev=3816.86, samples=10
00:13:42.921 iops : min= 78, max= 182, avg=111.70, stdev=29.85, samples=10
00:13:42.921 write: IOPS=105, BW=13.1MiB/s (13.8MB/s)(70.0MiB/5333msec); 0 zone resets
00:13:42.921 slat (usec): min=12, max=2254, avg=79.28, stdev=189.81
00:13:42.921 clat (msec): min=145, max=864, avg=562.45, stdev=81.41
00:13:42.921 lat (msec): min=145, max=864, avg=562.53, stdev=81.41
00:13:42.921 clat percentiles (msec):
00:13:42.921 | 1.00th=[ 247], 5.00th=[ 414], 10.00th=[ 498], 20.00th=[ 550],
00:13:42.921 | 30.00th=[ 558], 40.00th=[ 558], 50.00th=[ 567], 60.00th=[ 575],
00:13:42.921 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 600], 95.00th=[ 625],
00:13:42.921 | 99.00th=[ 844], 99.50th=[ 852], 99.90th=[ 869], 99.95th=[ 869],
00:13:42.921 | 99.99th=[ 869]
00:13:42.921 bw ( KiB/s): min= 7168, max=13824, per=3.20%, avg=12763.60, stdev=1989.15, samples=10
00:13:42.921 iops : min= 56, max= 108, avg=99.50, stdev=15.48, samples=10
00:13:42.921 lat (msec) : 50=43.85%, 100=3.65%, 250=2.94%, 500=4.63%, 750=43.49%
00:13:42.921 lat (msec) : 1000=1.43%
00:13:42.921 cpu : usr=0.43%, sys=0.71%, ctx=839, majf=0, minf=1
00:13:42.921 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4%
00:13:42.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.921 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.921 issued rwts: total=562,560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.921 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.922 job14: (groupid=0, jobs=1): err= 0: pid=70911: Thu Jul 25 17:04:34 2024
00:13:42.922 read: IOPS=101, BW=12.6MiB/s (13.3MB/s)(67.8MiB/5357msec)
00:13:42.922 slat (usec): min=7, max=480, avg=38.98, stdev=30.36
00:13:42.922 clat (msec): min=8, max=386, avg=44.57, stdev=32.12
00:13:42.922 lat (msec): min=8, max=386, avg=44.61, stdev=32.12
00:13:42.922 clat percentiles (msec):
00:13:42.922 | 1.00th=[ 20], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.922 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.922 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 43], 95.00th=[ 83],
00:13:42.922 | 99.00th=[ 186], 99.50th=[ 359], 99.90th=[ 388], 99.95th=[ 388],
00:13:42.922 | 99.99th=[ 388]
00:13:42.922 bw ( KiB/s): min=11008, max=17373, per=3.48%, avg=13789.20, stdev=1966.07, samples=10
00:13:42.922 iops : min= 86, max= 135, avg=107.50, stdev=15.22, samples=10
00:13:42.922 write: IOPS=103, BW=13.0MiB/s (13.6MB/s)(69.6MiB/5357msec); 0 zone resets
00:13:42.922 slat (usec): min=9, max=594, avg=47.75, stdev=45.06
00:13:42.922 clat (msec): min=156, max=908, avg=571.31, stdev=80.94
00:13:42.922 lat (msec): min=156, max=908, avg=571.36, stdev=80.95
00:13:42.922 clat percentiles (msec):
00:13:42.922 | 1.00th=[ 257], 5.00th=[ 439], 10.00th=[ 542], 20.00th=[ 558],
00:13:42.922 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 575],
00:13:42.922 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 617], 95.00th=[ 684],
00:13:42.922 | 99.00th=[ 860], 99.50th=[ 877], 99.90th=[ 911], 99.95th=[ 911],
00:13:42.922 | 99.99th=[ 911]
00:13:42.922 bw ( KiB/s): min= 6656, max=13824, per=3.19%, avg=12714.80, stdev=2151.59, samples=10
00:13:42.922 iops : min= 52, max= 108, avg=99.10, stdev=16.70, samples=10
00:13:42.922 lat (msec) : 10=0.09%, 20=0.45%, 50=44.86%, 100=2.27%, 250=1.73%
00:13:42.922 lat (msec) : 500=3.46%, 750=45.31%, 1000=1.82%
00:13:42.922 cpu : usr=0.37%, sys=0.78%, ctx=638, majf=0, minf=1
00:13:42.922 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.3%
00:13:42.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.922 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.922 issued rwts: total=542,557,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.922 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.922 job15: (groupid=0, jobs=1): err= 0: pid=70913: Thu Jul 25 17:04:34 2024
00:13:42.922 read: IOPS=100, BW=12.5MiB/s (13.1MB/s)(66.6MiB/5329msec)
00:13:42.922 slat (usec): min=5, max=2110, avg=53.09, stdev=135.49
00:13:42.922 clat (msec): min=26, max=364, avg=48.72, stdev=39.21
00:13:42.922 lat (msec): min=26, max=364, avg=48.77, stdev=39.20
00:13:42.922 clat percentiles (msec):
00:13:42.922 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.922 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40],
00:13:42.922 | 70.00th=[ 41], 80.00th=[ 42], 90.00th=[ 78], 95.00th=[ 126],
00:13:42.922 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 363], 99.95th=[ 363],
00:13:42.922 | 99.99th=[ 363]
00:13:42.922 bw ( KiB/s): min= 8192, max=19968, per=3.40%, avg=13479.70, stdev=3838.84, samples=10
00:13:42.922 iops : min= 64, max= 156, avg=105.00, stdev=29.95, samples=10
00:13:42.922 write: IOPS=103, BW=13.0MiB/s (13.6MB/s)(69.2MiB/5329msec); 0 zone resets
00:13:42.922 slat (usec): min=8, max=1350, avg=58.76, stdev=100.76
00:13:42.922 clat (msec): min=157, max=896, avg=567.92, stdev=80.34
00:13:42.922 lat (msec): min=157, max=896, avg=567.98, stdev=80.35
00:13:42.922 clat percentiles (msec):
00:13:42.922 | 1.00th=[ 257], 5.00th=[ 435], 10.00th=[ 506], 20.00th=[ 550],
00:13:42.922 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 584],
00:13:42.922 | 70.00th=[ 584], 80.00th=[ 600], 90.00th=[ 617], 95.00th=[ 642],
00:13:42.922 | 99.00th=[ 860], 99.50th=[ 877], 99.90th=[ 894], 99.95th=[ 894],
00:13:42.922 | 99.99th=[ 894]
00:13:42.922 bw ( KiB/s): min= 6912, max=13796, per=3.18%, avg=12712.20, stdev=2057.75, samples=10
00:13:42.922 iops : min= 54, max= 107, avg=99.00, stdev=15.95, samples=10
00:13:42.922 lat (msec) : 50=42.96%, 100=3.13%, 250=2.85%, 500=4.78%, 750=44.80%
00:13:42.922 lat (msec) : 1000=1.47%
00:13:42.922 cpu : usr=0.24%, sys=0.75%, ctx=846, majf=0, minf=1
00:13:42.922 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2%
00:13:42.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.922 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.922 issued rwts: total=533,554,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.922 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.922 job16: (groupid=0, jobs=1): err= 0: pid=70914: Thu Jul 25 17:04:34 2024
00:13:42.922 read: IOPS=111, BW=13.9MiB/s (14.6MB/s)(74.8MiB/5361msec)
00:13:42.922 slat (usec): min=7, max=397, avg=33.18, stdev=22.72
00:13:42.922 clat (msec): min=10, max=381, avg=45.30, stdev=34.20
00:13:42.922 lat (msec): min=10, max=381, avg=45.34, stdev=34.20
00:13:42.922 clat percentiles (msec):
00:13:42.922 | 1.00th=[ 14], 5.00th=[ 30], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.922 | 30.00th=[ 38], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.922 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 50], 95.00th=[ 85],
00:13:42.922 | 99.00th=[ 176], 99.50th=[ 363], 99.90th=[ 380], 99.95th=[ 380],
00:13:42.922 | 99.99th=[ 380]
00:13:42.922 bw ( KiB/s): min=11520, max=27190, per=3.84%, avg=15211.70, stdev=4719.17, samples=10
00:13:42.922 iops : min= 90, max= 212, avg=118.70, stdev=36.81, samples=10
00:13:42.922 write: IOPS=104, BW=13.0MiB/s (13.6MB/s)(69.8MiB/5361msec); 0 zone resets
00:13:42.922 slat (usec): min=12, max=163, avg=39.90, stdev=15.73
00:13:42.922 clat (msec): min=71, max=912, avg=565.49, stdev=84.07
00:13:42.922 lat (msec): min=71, max=912, avg=565.53, stdev=84.07
00:13:42.922 clat percentiles (msec):
00:13:42.922 | 1.00th=[ 226], 5.00th=[ 418], 10.00th=[ 518], 20.00th=[ 558],
00:13:42.922 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 575],
00:13:42.922 | 70.00th=[ 584], 80.00th=[ 584], 90.00th=[ 600], 95.00th=[ 659],
00:13:42.922 | 99.00th=[ 894], 99.50th=[ 911], 99.90th=[ 911], 99.95th=[ 911],
00:13:42.922 | 99.99th=[ 911]
00:13:42.922 bw ( KiB/s): min= 6925, max=13824, per=3.19%, avg=12750.00, stdev=2067.96, samples=10
00:13:42.922 iops : min= 54, max= 108, avg=99.50, stdev=16.15, samples=10
00:13:42.922 lat (msec) : 20=0.95%, 50=45.76%, 100=2.77%, 250=2.51%, 500=4.07%
00:13:42.922 lat (msec) : 750=42.65%, 1000=1.30%
00:13:42.922 cpu : usr=0.43%, sys=0.71%, ctx=642, majf=0, minf=1
00:13:42.922 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.6%
00:13:42.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.922 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.922 issued rwts: total=598,558,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.922 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.922 job17: (groupid=0, jobs=1): err= 0: pid=70915: Thu Jul 25 17:04:34 2024
00:13:42.922 read: IOPS=98, BW=12.3MiB/s (12.9MB/s)(65.6MiB/5348msec)
00:13:42.922 slat (usec): min=6, max=224, avg=37.24, stdev=24.02
00:13:42.922 clat (msec): min=26, max=358, avg=45.48, stdev=24.22
00:13:42.922 lat (msec): min=26, max=358, avg=45.52, stdev=24.22
00:13:42.922 clat percentiles (msec):
00:13:42.922 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37],
00:13:42.922 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.922 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 71], 95.00th=[ 100],
00:13:42.922 | 99.00th=[ 131], 99.50th=[ 144], 99.90th=[ 359], 99.95th=[ 359],
00:13:42.922 | 99.99th=[ 359]
00:13:42.922 bw ( KiB/s): min= 8704, max=23599, per=3.38%, avg=13388.60, stdev=4285.98, samples=10
00:13:42.922 iops : min= 68, max= 184, avg=104.40, stdev=33.45, samples=10
00:13:42.922 write: IOPS=104, BW=13.1MiB/s (13.7MB/s)(70.0MiB/5348msec); 0 zone resets
00:13:42.922 slat (usec): min=11, max=216, avg=43.56, stdev=21.28
00:13:42.922 clat (msec): min=149, max=908, avg=567.71, stdev=81.51
00:13:42.922 lat (msec): min=149, max=909, avg=567.75, stdev=81.51
00:13:42.922 clat percentiles (msec):
00:13:42.922 | 1.00th=[ 249], 5.00th=[ 409], 10.00th=[ 514], 20.00th=[ 550],
00:13:42.922 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 584],
00:13:42.922 | 70.00th=[ 592], 80.00th=[ 592], 90.00th=[ 609], 95.00th=[ 642],
00:13:42.922 | 99.00th=[ 844], 99.50th=[ 877], 99.90th=[ 911], 99.95th=[ 911],
00:13:42.922 | 99.99th=[ 911]
00:13:42.922 bw ( KiB/s): min= 6925, max=13824, per=3.19%, avg=12744.50, stdev=2065.54, samples=10
00:13:42.922 iops : min= 54, max= 108, avg=99.40, stdev=16.11, samples=10
00:13:42.922 lat (msec) : 50=42.40%, 100=3.78%, 250=2.67%, 500=3.87%, 750=45.90%
00:13:42.923 lat (msec) : 1000=1.38%
00:13:42.923 cpu : usr=0.24%, sys=0.80%, ctx=625, majf=0, minf=1
00:13:42.923 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2%
00:13:42.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.923 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.923 issued rwts: total=525,560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.923 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.923 job18: (groupid=0, jobs=1): err= 0: pid=70916: Thu Jul 25 17:04:34 2024
00:13:42.923 read: IOPS=104, BW=13.1MiB/s (13.8MB/s)(69.9MiB/5325msec)
00:13:42.923 slat (usec): min=5, max=194, avg=36.77, stdev=19.00
00:13:42.923 clat (msec): min=26, max=353, avg=46.95, stdev=33.78
00:13:42.923 lat (msec): min=26, max=353, avg=46.99, stdev=33.78
00:13:42.923 clat percentiles (msec):
00:13:42.923 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 37],
00:13:42.923 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40],
00:13:42.923 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 66], 95.00th=[ 105],
00:13:42.923 | 99.00th=[ 138], 99.50th=[ 334], 99.90th=[ 355], 99.95th=[ 355],
00:13:42.923 | 99.99th=[ 355]
00:13:42.923 bw ( KiB/s): min=11776, max=19751, per=3.58%, avg=14163.50, stdev=2297.15, samples=10
00:13:42.923 iops : min= 92, max= 154, avg=110.60, stdev=17.86, samples=10
00:13:42.923 write: IOPS=104, BW=13.1MiB/s (13.7MB/s)(69.8MiB/5325msec); 0 zone resets
00:13:42.923 slat (usec): min=8, max=648, avg=43.99, stdev=34.99
00:13:42.923 clat (msec): min=143, max=864, avg=562.99, stdev=78.23
00:13:42.923 lat (msec): min=143, max=864, avg=563.03, stdev=78.23
00:13:42.923 clat percentiles (msec):
00:13:42.923 | 1.00th=[ 243], 5.00th=[ 414], 10.00th=[ 514], 20.00th=[ 550],
00:13:42.923 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 567], 60.00th=[ 575],
00:13:42.923 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 609], 95.00th=[ 651],
00:13:42.923 | 99.00th=[ 827], 99.50th=[ 852], 99.90th=[ 869], 99.95th=[ 869],
00:13:42.923 | 99.99th=[ 869]
00:13:42.923 bw ( KiB/s): min= 7438, max=13824, per=3.21%, avg=12804.10, stdev=1908.77, samples=10
00:13:42.923 iops : min= 58, max= 108, avg=100.00, stdev=14.94, samples=10
00:13:42.923 lat (msec) : 50=44.23%, 100=2.95%, 250=2.95%, 500=4.39%, 750=44.14%
00:13:42.923 lat (msec) : 1000=1.34%
00:13:42.923 cpu : usr=0.32%, sys=0.77%, ctx=622, majf=0, minf=1
00:13:42.923 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4%
00:13:42.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.923 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.923 issued rwts: total=559,558,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.923 latency : target=0, window=0, percentile=100.00%, depth=64
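[Editor's note] The per= field in each bw line is that job's share of the group's aggregate bandwidth, so the group total can be estimated from any single job (job0's avg=13255.30 KiB/s at per=3.35% implies roughly 395,000 KiB/s aggregate read bandwidth). To total it directly from a saved copy of this console output instead (build.log is a placeholder file name), an illustrative one-liner:

  grep -oE 'read: IOPS=[0-9]+' build.log | awk -F= '{ total += $2 } END { print total, "aggregate read IOPS" }'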
00:13:42.923 job19: (groupid=0, jobs=1): err= 0: pid=70917: Thu Jul 25 17:04:34 2024
00:13:42.923 read: IOPS=116, BW=14.6MiB/s (15.3MB/s)(78.0MiB/5347msec)
00:13:42.923 slat (usec): min=8, max=490, avg=36.87, stdev=34.35
00:13:42.923 clat (msec): min=20, max=367, avg=47.12, stdev=37.03
00:13:42.923 lat (msec): min=20, max=367, avg=47.15, stdev=37.03
00:13:42.923 clat percentiles (msec):
00:13:42.923 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 37],
00:13:42.923 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.923 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 58], 95.00th=[ 96],
00:13:42.923 | 99.00th=[ 176], 99.50th=[ 355], 99.90th=[ 368], 99.95th=[ 368],
00:13:42.923 | 99.99th=[ 368]
00:13:42.923 bw ( KiB/s): min=11520, max=23760, per=3.99%, avg=15810.20, stdev=3455.52, samples=10
00:13:42.923 iops : min= 90, max= 185, avg=123.30, stdev=26.92, samples=10
00:13:42.923 write: IOPS=103, BW=13.0MiB/s (13.6MB/s)(69.4MiB/5347msec); 0 zone resets
00:13:42.923 slat (usec): min=12, max=3910, avg=51.07, stdev=167.42
00:13:42.923 clat (msec): min=160, max=880, avg=562.27, stdev=78.47
00:13:42.923 lat (msec): min=164, max=880, avg=562.32, stdev=78.44
00:13:42.923 clat percentiles (msec):
00:13:42.923 | 1.00th=[ 259], 5.00th=[ 426], 10.00th=[ 518], 20.00th=[ 542],
00:13:42.923 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 567], 60.00th=[ 575],
00:13:42.923 | 70.00th=[ 584], 80.00th=[ 584], 90.00th=[ 600], 95.00th=[ 625],
00:13:42.923 | 99.00th=[ 844], 99.50th=[ 869], 99.90th=[ 877], 99.95th=[ 877],
00:13:42.923 | 99.99th=[ 877]
00:13:42.923 bw ( KiB/s): min= 6642, max=13824, per=3.19%, avg=12741.80, stdev=2166.53, samples=10
00:13:42.923 iops : min= 51, max= 108, avg=99.30, stdev=17.13, samples=10
00:13:42.923 lat (msec) : 50=46.99%, 100=3.31%, 250=2.54%, 500=3.82%, 750=41.82%
00:13:42.923 lat (msec) : 1000=1.53%
00:13:42.923 cpu : usr=0.41%, sys=0.73%, ctx=667, majf=0, minf=1
00:13:42.923 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.7%
00:13:42.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.923 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.923 issued rwts: total=624,555,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.923 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.923 job20: (groupid=0, jobs=1): err= 0: pid=70918: Thu Jul 25 17:04:34 2024
00:13:42.923 read: IOPS=104, BW=13.1MiB/s (13.8MB/s)(70.5MiB/5376msec)
00:13:42.923 slat (usec): min=10, max=6059, avg=47.56, stdev=255.52
00:13:42.923 clat (usec): min=1240, max=395992, avg=42273.28, stdev=27740.71
00:13:42.923 lat (usec): min=1255, max=396048, avg=42320.84, stdev=27754.24
00:13:42.923 clat percentiles (usec):
00:13:42.923 | 1.00th=[ 1745], 5.00th=[ 28967], 10.00th=[ 35390], 20.00th=[ 36439],
00:13:42.923 | 30.00th=[ 36963], 40.00th=[ 37487], 50.00th=[ 38011], 60.00th=[ 38536],
00:13:42.923 | 70.00th=[ 39060], 80.00th=[ 40109], 90.00th=[ 42730], 95.00th=[ 73925],
00:13:42.923 | 99.00th=[156238], 99.50th=[170918], 99.90th=[396362], 99.95th=[396362],
00:13:42.923 | 99.99th=[396362]
00:13:42.923 bw ( KiB/s): min= 9453, max=20224, per=3.62%, avg=14356.90, stdev=2765.68, samples=10
00:13:42.923 iops : min= 73, max= 158, avg=112.00, stdev=21.80, samples=10
00:13:42.923 write: IOPS=106, BW=13.3MiB/s (14.0MB/s)(71.8MiB/5376msec); 0 zone resets
00:13:42.923 slat (usec): min=14, max=287, avg=42.45, stdev=20.58
00:13:42.923 clat (usec): min=1928, max=927874, avg=556964.13, stdev=126080.12
00:13:42.923 lat (msec): min=2, max=927, avg=557.01, stdev=126.08
00:13:42.923 clat percentiles (msec):
00:13:42.923 | 1.00th=[ 11], 5.00th=[ 351], 10.00th=[ 493], 20.00th=[ 550],
00:13:42.923 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 584],
00:13:42.923 | 70.00th=[ 592], 80.00th=[ 592], 90.00th=[ 617], 95.00th=[ 684],
00:13:42.923 | 99.00th=[ 869], 99.50th=[ 911], 99.90th=[ 927], 99.95th=[ 927],
00:13:42.923 | 99.99th=[ 927]
00:13:42.923 bw ( KiB/s): min=11008, max=13824, per=3.29%, avg=13127.20, stdev=832.41, samples=10
00:13:42.923 iops : min= 86, max= 108, avg=102.40, stdev= 6.40, samples=10
00:13:42.923 lat (msec) : 2=0.62%, 10=0.35%, 20=2.46%, 50=44.20%, 100=2.28%
00:13:42.924 lat (msec) : 250=1.49%, 500=3.25%, 750=43.76%, 1000=1.58%
00:13:42.924 cpu : usr=0.32%, sys=0.89%, ctx=618, majf=0, minf=1
00:13:42.924 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5%
00:13:42.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.924 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.924 issued rwts: total=564,574,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.924 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.924 job21: (groupid=0, jobs=1): err= 0: pid=70919: Thu Jul 25 17:04:34 2024
00:13:42.924 read: IOPS=95, BW=12.0MiB/s (12.5MB/s)(63.9MiB/5341msec)
00:13:42.924 slat (usec): min=6, max=3979, avg=35.43, stdev=175.27
00:13:42.924 clat (msec): min=28, max=365, avg=46.28, stdev=33.62
00:13:42.924 lat (msec): min=28, max=365, avg=46.32, stdev=33.62
00:13:42.924 clat percentiles (msec):
00:13:42.924 | 1.00th=[ 30], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37],
00:13:42.924 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.924 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 59], 95.00th=[ 91],
00:13:42.924 | 99.00th=[ 140], 99.50th=[ 355], 99.90th=[ 368], 99.95th=[ 368],
00:13:42.924 | 99.99th=[ 368]
00:13:42.924 bw ( KiB/s): min= 8960, max=17152, per=3.28%, avg=12974.20, stdev=2603.53, samples=10
00:13:42.924 iops : min= 70, max= 134, avg=101.20, stdev=20.38, samples=10
00:13:42.924 write: IOPS=104, BW=13.0MiB/s (13.7MB/s)(69.6MiB/5341msec); 0 zone resets
00:13:42.924 slat (nsec): min=9926, max=81178, avg=32763.31, stdev=11641.70
00:13:42.924 clat (msec): min=149, max=926, avg=570.47, stdev=82.51
00:13:42.924 lat (msec): min=149, max=926, avg=570.51, stdev=82.51
00:13:42.924 clat percentiles (msec):
00:13:42.924 | 1.00th=[ 251], 5.00th=[ 418], 10.00th=[ 542], 20.00th=[ 558],
00:13:42.924 | 30.00th=[ 567], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 575],
00:13:42.924 | 70.00th=[ 584], 80.00th=[ 600], 90.00th=[ 617], 95.00th=[ 642],
00:13:42.924 | 99.00th=[ 877], 99.50th=[ 911], 99.90th=[ 927], 99.95th=[ 927],
00:13:42.924 | 99.99th=[ 927]
00:13:42.924 bw ( KiB/s): min= 7168, max=13796, per=3.19%, avg=12743.20, stdev=1979.29, samples=10
00:13:42.924 iops : min= 56, max= 107, avg=99.40, stdev=15.39, samples=10
00:13:42.924 lat (msec) : 50=42.88%, 100=2.72%, 250=2.43%, 500=3.75%, 750=46.54%
00:13:42.924 lat (msec) : 1000=1.69%
00:13:42.924 cpu : usr=0.22%, sys=0.60%, ctx=610, majf=0, minf=1
00:13:42.924 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.1%
00:13:42.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.924 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.924 issued rwts: total=511,557,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.924 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.924 job22: (groupid=0, jobs=1): err= 0: pid=70920: Thu Jul 25 17:04:34 2024
00:13:42.924 read: IOPS=116, BW=14.6MiB/s (15.3MB/s)(78.2MiB/5351msec)
00:13:42.924 slat (usec): min=6, max=1984, avg=43.33, stdev=94.40
00:13:42.924 clat (msec): min=25, max=366, avg=42.56, stdev=20.97
00:13:42.924 lat (msec): min=25, max=366, avg=42.60, stdev=20.97
00:13:42.924 clat percentiles (msec):
00:13:42.924 | 1.00th=[ 28], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.924 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.924 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 44], 95.00th=[ 74],
00:13:42.924 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 368], 99.95th=[ 368],
00:13:42.924 | 99.99th=[ 368]
00:13:42.924 bw ( KiB/s): min=12544, max=21716, per=4.04%, avg=15996.80, stdev=2953.65, samples=10
00:13:42.924 iops : min= 98, max= 169, avg=124.80, stdev=22.96, samples=10
00:13:42.924 write: IOPS=104, BW=13.1MiB/s (13.7MB/s)(70.1MiB/5351msec); 0 zone resets
00:13:42.924 slat (usec): min=13, max=1962, avg=52.65, stdev=102.33
00:13:42.924 clat (msec): min=144, max=903, avg=562.06, stdev=81.91
00:13:42.924 lat (msec): min=144, max=903, avg=562.11, stdev=81.92
00:13:42.924 clat percentiles (msec):
00:13:42.924 | 1.00th=[ 253], 5.00th=[ 418], 10.00th=[ 518], 20.00th=[ 542],
00:13:42.924 | 30.00th=[ 558], 40.00th=[ 558], 50.00th=[ 567], 60.00th=[ 575],
00:13:42.924 | 70.00th=[ 575], 80.00th=[ 584], 90.00th=[ 617], 95.00th=[ 659],
00:13:42.924 | 99.00th=[ 869], 99.50th=[ 885], 99.90th=[ 902], 99.95th=[ 902],
00:13:42.924 | 99.99th=[ 902]
00:13:42.924 bw ( KiB/s): min= 6925, max=13824, per=3.19%, avg=12744.50, stdev=2065.54, samples=10
00:13:42.924 iops : min= 54, max= 108, avg=99.40, stdev=16.11, samples=10
00:13:42.924 lat (msec) : 50=48.53%, 100=2.78%, 250=1.77%, 500=3.54%, 750=42.04%
00:13:42.924 lat (msec) : 1000=1.35%
00:13:42.924 cpu : usr=0.36%, sys=0.60%, ctx=959, majf=0, minf=1
00:13:42.924 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7%
00:13:42.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.924 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.924 issued rwts: total=626,561,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.924 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.924 job23: (groupid=0, jobs=1): err= 0: pid=70921: Thu Jul 25 17:04:34 2024
00:13:42.924 read: IOPS=98, BW=12.3MiB/s (12.9MB/s)(66.2MiB/5378msec)
00:13:42.924 slat (usec): min=7, max=324, avg=34.50, stdev=36.45
00:13:42.924 clat (usec): min=1117, max=390260, avg=42080.90, stdev=31301.66
00:13:42.924 lat (usec): min=1131, max=390289, avg=42115.40, stdev=31301.63
00:13:42.924 clat percentiles (usec):
00:13:42.924 | 1.00th=[ 1565], 5.00th=[ 20055], 10.00th=[ 30016], 20.00th=[ 36439],
00:13:42.924 | 30.00th=[ 36963], 40.00th=[ 37487], 50.00th=[ 38011], 60.00th=[ 38536],
00:13:42.924 | 70.00th=[ 39060], 80.00th=[ 40109], 90.00th=[ 41681], 95.00th=[ 84411],
00:13:42.924 | 99.00th=[162530], 99.50th=[181404], 99.90th=[392168], 99.95th=[392168],
00:13:42.924 | 99.99th=[392168]
00:13:42.924 bw ( KiB/s): min= 9728, max=23040, per=3.41%, avg=13511.90, stdev=3834.94, samples=10
00:13:42.924 iops : min= 76, max= 180, avg=105.40, stdev=30.04, samples=10
00:13:42.924 write: IOPS=106, BW=13.3MiB/s (13.9MB/s)(71.4MiB/5378msec); 0 zone resets
00:13:42.924 slat (usec): min=10, max=436, avg=43.01, stdev=44.76
00:13:42.924 clat (msec): min=2, max=931, avg=562.81, stdev=119.82
00:13:42.924 lat (msec): min=2, max=931, avg=562.86, stdev=119.83
00:13:42.924 clat percentiles (msec):
00:13:42.924 | 1.00th=[ 4], 5.00th=[ 393], 10.00th=[ 523], 20.00th=[ 550],
00:13:42.924 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 584],
00:13:42.924 | 70.00th=[ 592], 80.00th=[ 600], 90.00th=[ 625], 95.00th=[ 684],
00:13:42.924 | 99.00th=[ 860], 99.50th=[ 894], 99.90th=[ 936], 99.95th=[ 936],
00:13:42.924 | 99.99th=[ 936]
00:13:42.924 bw ( KiB/s): min= 9984, max=13824, per=3.27%, avg=13050.50, stdev=1117.08, samples=10
00:13:42.924 iops : min= 78, max= 108, avg=101.80, stdev= 8.68, samples=10
00:13:42.924 lat (msec) : 2=0.91%, 4=0.64%, 10=1.45%, 20=0.45%, 50=42.69%
00:13:42.925 lat (msec) : 100=1.54%, 250=2.00%, 500=3.27%, 750=44.96%, 1000=2.09%
00:13:42.925 cpu : usr=0.32%, sys=0.56%, ctx=712, majf=0, minf=1
00:13:42.925 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.3%
00:13:42.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.925 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.925 issued rwts: total=530,571,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.925 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.925 job24: (groupid=0, jobs=1): err= 0: pid=70922: Thu Jul 25 17:04:34 2024
00:13:42.925 read: IOPS=109, BW=13.6MiB/s (14.3MB/s)(73.1MiB/5364msec)
00:13:42.925 slat (usec): min=8, max=3493, avg=45.53, stdev=148.28
00:13:42.925 clat (msec): min=7, max=383, avg=43.03, stdev=28.03
00:13:42.925 lat (msec): min=7, max=383, avg=43.08, stdev=28.02
00:13:42.925 clat percentiles (msec):
00:13:42.925 | 1.00th=[ 19], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 37],
00:13:42.925 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.925 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 43], 95.00th=[ 68],
00:13:42.925 | 99.00th=[ 171], 99.50th=[ 186], 99.90th=[ 384], 99.95th=[ 384],
00:13:42.925 | 99.99th=[ 384]
00:13:42.925 bw ( KiB/s): min=12288, max=20521, per=3.76%, avg=14903.40, stdev=2175.21, samples=10
00:13:42.925 iops : min= 96, max= 160, avg=116.30, stdev=16.93, samples=10
00:13:42.925 write: IOPS=104, BW=13.0MiB/s (13.6MB/s)(69.8MiB/5364msec); 0 zone resets
00:13:42.925 slat (usec): min=12, max=281, avg=47.79, stdev=29.50
00:13:42.925 clat (msec): min=114, max=914, avg=568.84, stdev=83.05
00:13:42.925 lat (msec): min=114, max=914, avg=568.88, stdev=83.06
00:13:42.925 clat percentiles (msec):
00:13:42.925 | 1.00th=[ 257], 5.00th=[ 422], 10.00th=[ 531], 20.00th=[ 550],
00:13:42.925 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 567], 60.00th=[ 575],
00:13:42.925 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 609], 95.00th=[ 693],
00:13:42.925 | 99.00th=[ 860], 99.50th=[ 894], 99.90th=[ 919], 99.95th=[ 919],
00:13:42.925 | 99.99th=[ 919]
00:13:42.925 bw ( KiB/s): min= 6669, max=13824, per=3.19%, avg=12724.40, stdev=2148.12, samples=10
00:13:42.925 iops : min= 52, max= 108, avg=99.30, stdev=16.77, samples=10
00:13:42.925 lat (msec) : 10=0.17%, 20=0.52%, 50=46.63%, 100=2.36%, 250=1.75%
00:13:42.925 lat (msec) : 500=3.59%, 750=43.13%, 1000=1.84%
00:13:42.925 cpu : usr=0.26%, sys=0.84%, ctx=659, majf=0, minf=1
00:13:42.925 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5%
00:13:42.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.925 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.925 issued rwts: total=585,558,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.925 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.925 job25: (groupid=0, jobs=1): err= 0: pid=70923: Thu Jul 25 17:04:34 2024
00:13:42.925 read: IOPS=105, BW=13.2MiB/s (13.8MB/s)(70.8MiB/5357msec)
00:13:42.925 slat (usec): min=9, max=3230, avg=38.57, stdev=135.81
00:13:42.925 clat (msec): min=24, max=386, avg=46.09, stdev=34.62
00:13:42.925 lat (msec): min=24, max=386, avg=46.13, stdev=34.63
00:13:42.925 clat percentiles (msec):
00:13:42.925 | 1.00th=[ 29], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 37],
00:13:42.925 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40],
00:13:42.925 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 52], 95.00th=[ 104],
00:13:42.925 | 99.00th=[ 165], 99.50th=[ 368], 99.90th=[ 388], 99.95th=[ 388],
00:13:42.925 | 99.99th=[ 388]
00:13:42.925 bw ( KiB/s): min=10752, max=20008, per=3.63%, avg=14365.20, stdev=2757.79, samples=10
00:13:42.925 iops : min= 84, max= 156, avg=112.10, stdev=21.42, samples=10
00:13:42.925 write: IOPS=103, BW=13.0MiB/s (13.6MB/s)(69.5MiB/5357msec); 0 zone resets
00:13:42.925 slat (usec): min=16, max=146, avg=38.74, stdev=15.84
00:13:42.925 clat (msec): min=50, max=918, avg=568.52, stdev=81.63
00:13:42.925 lat (msec): min=50, max=918, avg=568.56, stdev=81.63
00:13:42.925 clat percentiles (msec):
00:13:42.925 | 1.00th=[ 255], 5.00th=[ 443], 10.00th=[ 535], 20.00th=[ 550],
00:13:42.925 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 575],
00:13:42.925 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 600], 95.00th=[ 684],
00:13:42.925 | 99.00th=[ 869], 99.50th=[ 885], 99.90th=[ 919], 99.95th=[ 919],
00:13:42.925 | 99.99th=[ 919]
00:13:42.925 bw ( KiB/s): min= 6669, max=13824, per=3.19%, avg=12724.40, stdev=2151.13, samples=10
00:13:42.925 iops : min= 52, max= 108, avg=99.30, stdev=16.79, samples=10
00:13:42.925 lat (msec) : 50=45.28%, 100=2.67%, 250=2.58%, 500=3.30%, 750=44.65%
00:13:42.925 lat (msec) : 1000=1.52%
00:13:42.925 cpu : usr=0.26%, sys=0.71%, ctx=710, majf=0, minf=1
00:13:42.925 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4%
00:13:42.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.925 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.925 issued rwts: total=566,556,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.925 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.925 job26: (groupid=0, jobs=1): err= 0: pid=70924: Thu Jul 25 17:04:34 2024
00:13:42.925 read: IOPS=96, BW=12.1MiB/s (12.7MB/s)(64.8MiB/5348msec)
00:13:42.925 slat (usec): min=7, max=201, avg=29.18, stdev=14.70
00:13:42.925 clat (msec): min=27, max=377, avg=46.23, stdev=32.50
00:13:42.925 lat (msec): min=27, max=377, avg=46.26, stdev=32.50
00:13:42.925 clat percentiles (msec):
00:13:42.925 | 1.00th=[ 29], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 37],
00:13:42.925 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39],
00:13:42.925 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 58], 95.00th=[ 103],
00:13:42.925 | 99.00th=[ 163], 99.50th=[ 359], 99.90th=[ 380], 99.95th=[ 380],
00:13:42.925 | 99.99th=[ 380]
00:13:42.925 bw ( KiB/s): min=10752, max=19751, per=3.32%, avg=13156.40, stdev=2613.33, samples=10
00:13:42.925 iops : min= 84, max= 154, avg=102.60, stdev=20.25, samples=10
00:13:42.925 write: IOPS=104, BW=13.0MiB/s (13.7MB/s)(69.8MiB/5348msec); 0 zone resets
00:13:42.925 slat (usec): min=13, max=463, avg=35.41, stdev=22.69
00:13:42.925 clat (msec): min=152, max=890, avg=569.69, stdev=80.98
00:13:42.925 lat (msec): min=152, max=890, avg=569.72, stdev=80.99
00:13:42.925 clat percentiles (msec):
00:13:42.925 | 1.00th=[ 253], 5.00th=[ 418], 10.00th=[ 531], 20.00th=[ 550],
00:13:42.925 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 575],
00:13:42.925 | 70.00th=[ 584], 80.00th=[ 600], 90.00th=[ 617], 95.00th=[ 667],
00:13:42.925 | 99.00th=[ 852], 99.50th=[ 877], 99.90th=[ 894], 99.95th=[ 894],
00:13:42.925 | 99.99th=[ 894]
00:13:42.925 bw ( KiB/s): min= 6925, max=13824, per=3.19%, avg=12744.50, stdev=2065.54, samples=10
00:13:42.925 iops : min= 54, max= 108, avg=99.40, stdev=16.11, samples=10
00:13:42.925 lat (msec) : 50=42.75%, 100=2.88%, 250=2.70%, 500=3.62%, 750=46.47%
00:13:42.925 lat (msec) : 1000=1.58%
00:13:42.925 cpu :
usr=0.21%, sys=0.64%, ctx=660, majf=0, minf=1 00:13:42.925 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.1% 00:13:42.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.925 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:42.925 issued rwts: total=518,558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.925 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:42.925 job27: (groupid=0, jobs=1): err= 0: pid=70925: Thu Jul 25 17:04:34 2024 00:13:42.925 read: IOPS=98, BW=12.3MiB/s (12.9MB/s)(65.9MiB/5355msec) 00:13:42.925 slat (nsec): min=6438, max=89835, avg=27766.09, stdev=11808.78 00:13:42.925 clat (msec): min=28, max=150, avg=46.23, stdev=22.77 00:13:42.925 lat (msec): min=28, max=150, avg=46.26, stdev=22.76 00:13:42.925 clat percentiles (msec): 00:13:42.925 | 1.00th=[ 30], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:13:42.925 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:13:42.925 | 70.00th=[ 40], 80.00th=[ 42], 90.00th=[ 75], 95.00th=[ 101], 00:13:42.925 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 150], 99.95th=[ 150], 00:13:42.925 | 99.99th=[ 150] 00:13:42.925 bw ( KiB/s): min= 9216, max=24576, per=3.40%, avg=13486.40, stdev=4346.42, samples=10 00:13:42.925 iops : min= 72, max= 192, avg=105.20, stdev=34.03, samples=10 00:13:42.925 write: IOPS=104, BW=13.0MiB/s (13.7MB/s)(69.9MiB/5355msec); 0 zone resets 00:13:42.925 slat (usec): min=12, max=7957, avg=48.31, stdev=335.33 00:13:42.925 clat (msec): min=154, max=924, avg=567.84, stdev=83.01 00:13:42.925 lat (msec): min=161, max=924, avg=567.89, stdev=82.94 00:13:42.925 clat percentiles (msec): 00:13:42.925 | 1.00th=[ 259], 5.00th=[ 414], 10.00th=[ 506], 20.00th=[ 550], 00:13:42.925 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 575], 60.00th=[ 584], 00:13:42.925 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 600], 95.00th=[ 667], 00:13:42.925 | 99.00th=[ 844], 99.50th=[ 902], 99.90th=[ 927], 99.95th=[ 927], 00:13:42.925 | 99.99th=[ 927] 00:13:42.925 bw ( KiB/s): min= 6656, max=13824, per=3.18%, avg=12692.00, stdev=2140.03, samples=10 00:13:42.925 iops : min= 52, max= 108, avg=99.00, stdev=16.65, samples=10 00:13:42.925 lat (msec) : 50=42.08%, 100=3.87%, 250=3.04%, 500=4.05%, 750=45.21% 00:13:42.925 lat (msec) : 1000=1.75% 00:13:42.925 cpu : usr=0.19%, sys=0.67%, ctx=626, majf=0, minf=1 00:13:42.925 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2% 00:13:42.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.925 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:42.925 issued rwts: total=527,559,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.925 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:42.925 job28: (groupid=0, jobs=1): err= 0: pid=70926: Thu Jul 25 17:04:34 2024 00:13:42.925 read: IOPS=112, BW=14.0MiB/s (14.7MB/s)(75.2MiB/5372msec) 00:13:42.925 slat (usec): min=9, max=113, avg=29.75, stdev=11.93 00:13:42.925 clat (msec): min=4, max=381, avg=44.48, stdev=31.46 00:13:42.925 lat (msec): min=4, max=381, avg=44.51, stdev=31.46 00:13:42.925 clat percentiles (msec): 00:13:42.925 | 1.00th=[ 12], 5.00th=[ 30], 10.00th=[ 36], 20.00th=[ 37], 00:13:42.925 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39], 00:13:42.925 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 48], 95.00th=[ 77], 00:13:42.925 | 99.00th=[ 178], 99.50th=[ 188], 99.90th=[ 380], 99.95th=[ 380], 00:13:42.925 | 99.99th=[ 380] 00:13:42.925 bw ( KiB/s): min= 9984, max=27136, 
per=3.88%, avg=15354.00, stdev=4469.36, samples=10 00:13:42.925 iops : min= 78, max= 212, avg=119.80, stdev=34.94, samples=10 00:13:42.925 write: IOPS=104, BW=13.0MiB/s (13.6MB/s)(69.9MiB/5372msec); 0 zone resets 00:13:42.925 slat (usec): min=13, max=3738, avg=47.59, stdev=161.83 00:13:42.925 clat (msec): min=86, max=932, avg=565.87, stdev=84.20 00:13:42.925 lat (msec): min=89, max=932, avg=565.92, stdev=84.17 00:13:42.925 clat percentiles (msec): 00:13:42.925 | 1.00th=[ 245], 5.00th=[ 426], 10.00th=[ 523], 20.00th=[ 542], 00:13:42.925 | 30.00th=[ 550], 40.00th=[ 558], 50.00th=[ 567], 60.00th=[ 575], 00:13:42.925 | 70.00th=[ 584], 80.00th=[ 592], 90.00th=[ 600], 95.00th=[ 667], 00:13:42.925 | 99.00th=[ 885], 99.50th=[ 902], 99.90th=[ 936], 99.95th=[ 936], 00:13:42.925 | 99.99th=[ 936] 00:13:42.925 bw ( KiB/s): min= 6656, max=13824, per=3.19%, avg=12717.60, stdev=2149.78, samples=10 00:13:42.925 iops : min= 52, max= 108, avg=99.20, stdev=16.73, samples=10 00:13:42.925 lat (msec) : 10=0.43%, 20=0.78%, 50=45.74%, 100=3.01%, 250=2.24% 00:13:42.925 lat (msec) : 500=3.45%, 750=42.72%, 1000=1.64% 00:13:42.925 cpu : usr=0.28%, sys=0.65%, ctx=691, majf=0, minf=1 00:13:42.925 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.6% 00:13:42.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.925 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:42.925 issued rwts: total=602,559,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.925 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:42.925 job29: (groupid=0, jobs=1): err= 0: pid=70927: Thu Jul 25 17:04:34 2024 00:13:42.925 read: IOPS=106, BW=13.3MiB/s (14.0MB/s)(71.1MiB/5330msec) 00:13:42.925 slat (usec): min=5, max=144, avg=28.81, stdev=14.15 00:13:42.925 clat (msec): min=26, max=340, avg=47.68, stdev=29.17 00:13:42.925 lat (msec): min=26, max=340, avg=47.70, stdev=29.17 00:13:42.925 clat percentiles (msec): 00:13:42.925 | 1.00th=[ 29], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 37], 00:13:42.925 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:13:42.925 | 70.00th=[ 40], 80.00th=[ 42], 90.00th=[ 80], 95.00th=[ 113], 00:13:42.925 | 99.00th=[ 146], 99.50th=[ 153], 99.90th=[ 342], 99.95th=[ 342], 00:13:42.925 | 99.99th=[ 342] 00:13:42.925 bw ( KiB/s): min= 8192, max=26880, per=3.66%, avg=14492.70, stdev=5322.04, samples=10 00:13:42.925 iops : min= 64, max= 210, avg=113.20, stdev=41.57, samples=10 00:13:42.925 write: IOPS=105, BW=13.1MiB/s (13.8MB/s)(70.0MiB/5330msec); 0 zone resets 00:13:42.925 slat (usec): min=10, max=190, avg=34.50, stdev=15.79 00:13:42.925 clat (msec): min=146, max=891, avg=559.93, stdev=84.28 00:13:42.925 lat (msec): min=146, max=891, avg=559.97, stdev=84.28 00:13:42.925 clat percentiles (msec): 00:13:42.925 | 1.00th=[ 241], 5.00th=[ 401], 10.00th=[ 468], 20.00th=[ 550], 00:13:42.925 | 30.00th=[ 558], 40.00th=[ 567], 50.00th=[ 567], 60.00th=[ 575], 00:13:42.925 | 70.00th=[ 584], 80.00th=[ 584], 90.00th=[ 600], 95.00th=[ 634], 00:13:42.925 | 99.00th=[ 852], 99.50th=[ 869], 99.90th=[ 894], 99.95th=[ 894], 00:13:42.925 | 99.99th=[ 894] 00:13:42.925 bw ( KiB/s): min= 7168, max=13851, per=3.20%, avg=12777.10, stdev=1993.54, samples=10 00:13:42.925 iops : min= 56, max= 108, avg=99.80, stdev=15.56, samples=10 00:13:42.925 lat (msec) : 50=42.87%, 100=4.34%, 250=3.54%, 500=6.02%, 750=41.81% 00:13:42.925 lat (msec) : 1000=1.42% 00:13:42.925 cpu : usr=0.15%, sys=0.73%, ctx=659, majf=0, minf=1 00:13:42.925 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 
16=1.4%, 32=2.8%, >=64=94.4%
00:13:42.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.925 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:13:42.925 issued rwts: total=569,560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.925 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:42.925
00:13:42.925 Run status group 0 (all jobs):
00:13:42.925 READ: bw=387MiB/s (406MB/s), 11.6MiB/s-14.6MiB/s (12.2MB/s-15.3MB/s), io=2080MiB (2181MB), run=5325-5378msec
00:13:42.925 WRITE: bw=390MiB/s (409MB/s), 13.0MiB/s-13.3MiB/s (13.6MB/s-14.0MB/s), io=2096MiB (2198MB), run=5325-5378msec
00:13:42.925
00:13:42.925 Disk stats (read/write):
00:13:42.925 sda: ios=569/517, merge=0/0, ticks=22467/290414, in_queue=312881, util=90.41%
00:13:42.925 sdb: ios=557/516, merge=0/0, ticks=22674/290439, in_queue=313114, util=90.53%
00:13:42.925 sdc: ios=614/516, merge=0/0, ticks=24598/287909, in_queue=312508, util=91.02%
00:13:42.925 sdd: ios=577/517, merge=0/0, ticks=23271/290326, in_queue=313597, util=91.48%
00:13:42.925 sde: ios=603/516, merge=0/0, ticks=25150/286467, in_queue=311617, util=89.87%
00:13:42.925 sdf: ios=608/517, merge=0/0, ticks=23964/289832, in_queue=313797, util=92.23%
00:13:42.925 sdg: ios=575/517, merge=0/0, ticks=23026/290423, in_queue=313449, util=91.82%
00:13:42.925 sdh: ios=498/517, merge=0/0, ticks=21762/290716, in_queue=312479, util=91.56%
00:13:42.925 sdi: ios=511/519, merge=0/0, ticks=23393/290676, in_queue=314069, util=92.22%
00:13:42.925 sdj: ios=570/518, merge=0/0, ticks=25417/288527, in_queue=313945, util=92.61%
00:13:42.925 sdk: ios=577/516, merge=0/0, ticks=25347/286253, in_queue=311600, util=92.38%
00:13:42.925 sdl: ios=566/517, merge=0/0, ticks=25523/288068, in_queue=313591, util=92.49%
00:13:42.925 sdm: ios=584/516, merge=0/0, ticks=26116/285794, in_queue=311910, util=91.65%
00:13:42.925 sdn: ios=562/516, merge=0/0, ticks=24978/286324, in_queue=311303, util=91.22%
00:13:42.925 sdo: ios=542/517, merge=0/0, ticks=23083/290674, in_queue=313757, util=93.64%
00:13:42.926 sdp: ios=533/515, merge=0/0, ticks=23793/288471, in_queue=312264, util=92.14%
00:13:42.926 sdq: ios=598/519, merge=0/0, ticks=25988/289016, in_queue=315005, util=94.35%
00:13:42.926 sdr: ios=525/516, merge=0/0, ticks=23512/289272, in_queue=312785, util=94.38%
00:13:42.926 sds: ios=559/516, merge=0/0, ticks=24616/286846, in_queue=311462, util=94.07%
00:13:42.926 sdt: ios=624/517, merge=0/0, ticks=27385/286846, in_queue=314232, util=94.74%
00:13:42.926 sdu: ios=564/536, merge=0/0, ticks=23043/292425, in_queue=315468, util=95.82%
00:13:42.926 sdv: ios=511/516, merge=0/0, ticks=22320/290283, in_queue=312603, util=95.33%
00:13:42.926 sdw: ios=626/517, merge=0/0, ticks=25978/286470, in_queue=312449, util=93.76%
00:13:42.926 sdx: ios=530/532, merge=0/0, ticks=21456/293395, in_queue=314851, util=96.28%
00:13:42.926 sdy: ios=585/518, merge=0/0, ticks=24409/289647, in_queue=314056, util=95.93%
00:13:42.926 sdz: ios=566/518, merge=0/0, ticks=24646/289341, in_queue=313988, util=96.29%
00:13:42.926 sdaa: ios=518/517, merge=0/0, ticks=22898/290173, in_queue=313071, util=96.28%
00:13:42.926 sdab: ios=527/516, merge=0/0, ticks=24305/288633, in_queue=312938, util=96.23%
00:13:42.926 sdac: ios=602/518, merge=0/0, ticks=25958/287928, in_queue=313886, util=96.69%
00:13:42.926 sdad: ios=569/516, merge=0/0, ticks=26452/285128, in_queue=311580, util=96.58%
00:13:42.926 [2024-07-25 17:04:34.550461] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.926 [2024-07-25 17:04:34.552308] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.926 [2024-07-25 17:04:34.553975] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.926 [2024-07-25 17:04:34.555614] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.926 [2024-07-25 17:04:34.557275] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:42.926 17:04:34 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 262144 -d 16 -t randwrite -r 10
00:13:42.926 [global]
00:13:42.926 thread=1
00:13:42.926 invalidate=1
00:13:42.926 rw=randwrite
00:13:42.926 time_based=1
00:13:42.926 runtime=10
00:13:42.926 ioengine=libaio
00:13:42.926 direct=1
00:13:42.926 bs=262144
00:13:42.926 iodepth=16
00:13:42.926 norandommap=1
00:13:42.926 numjobs=1
00:13:42.926
00:13:42.926 [job0]
00:13:42.926 filename=/dev/sda
00:13:42.926 [job1]
00:13:42.926 filename=/dev/sdb
00:13:42.926 [job2]
00:13:42.926 filename=/dev/sdc
00:13:42.926 [job3]
00:13:42.926 filename=/dev/sdd
00:13:42.926 [job4]
00:13:42.926 filename=/dev/sde
00:13:42.926 [job5]
00:13:42.926 filename=/dev/sdf
00:13:42.926 [job6]
00:13:42.926 filename=/dev/sdg
00:13:42.926 [job7]
00:13:42.926 filename=/dev/sdh
00:13:42.926 [job8]
00:13:42.926 filename=/dev/sdi
00:13:42.926 [job9]
00:13:42.926 filename=/dev/sdj
00:13:42.926 [job10]
00:13:42.926 filename=/dev/sdk
00:13:42.926 [job11]
00:13:42.926 filename=/dev/sdl
00:13:42.926 [job12]
00:13:42.926 filename=/dev/sdm
00:13:42.926 [job13]
00:13:42.926 filename=/dev/sdn
00:13:42.926 [job14]
00:13:42.926 filename=/dev/sdo
00:13:42.926 [job15]
00:13:42.926 filename=/dev/sdp
00:13:42.926 [job16]
00:13:42.926 filename=/dev/sdq
00:13:42.926 [job17]
00:13:42.926 filename=/dev/sdr
00:13:42.926 [job18]
00:13:42.926 filename=/dev/sds
00:13:42.926 [job19]
00:13:42.926 filename=/dev/sdt
00:13:42.926 [job20]
00:13:42.926 filename=/dev/sdu
00:13:42.926 [job21]
00:13:42.926 filename=/dev/sdv
00:13:42.926 [job22]
00:13:42.926 filename=/dev/sdw
00:13:42.926 [job23]
00:13:42.926 filename=/dev/sdx
00:13:42.926 [job24]
00:13:42.926 filename=/dev/sdy
00:13:42.926 [job25]
00:13:42.926 filename=/dev/sdz
00:13:42.926 [job26]
00:13:42.926 filename=/dev/sdaa
00:13:42.926 [job27]
00:13:42.926 filename=/dev/sdab
00:13:42.926 [job28]
00:13:42.926 filename=/dev/sdac
00:13:42.926 [job29]
00:13:42.926 filename=/dev/sdad
00:13:42.926 queue_depth set to 113 (sda)
00:13:42.926 queue_depth set to 113 (sdb)
00:13:42.926 queue_depth set to 113 (sdc)
00:13:42.926 queue_depth set to 113 (sdd)
00:13:42.926 queue_depth set to 113 (sde)
00:13:42.926 queue_depth set to 113 (sdf)
00:13:42.926 queue_depth set to 113 (sdg)
00:13:42.926 queue_depth set to 113 (sdh)
00:13:42.926 queue_depth set to 113 (sdi)
00:13:42.926 queue_depth set to 113 (sdj)
00:13:42.926 queue_depth set to 113 (sdk)
00:13:42.926 queue_depth set to 113 (sdl)
00:13:42.926 queue_depth set to 113 (sdm)
00:13:42.926 queue_depth set to 113 (sdn)
00:13:42.926 queue_depth set to 113 (sdo)
00:13:42.926 queue_depth set to 113 (sdp)
00:13:42.926 queue_depth set to 113 (sdq)
00:13:42.926 queue_depth set to 113 (sdr)
00:13:42.926 queue_depth set to 113 (sds)
00:13:42.926 queue_depth set to 113 (sdt)
00:13:42.926 queue_depth set to 113 (sdu)
00:13:42.926 queue_depth set to 113 (sdv)
00:13:42.926 queue_depth set to 113 (sdw)
00:13:42.926 queue_depth set to 113 (sdx)
00:13:42.926 queue_depth set to 113 (sdy)
00:13:42.926 queue_depth set to 113 (sdz)
00:13:42.926 queue_depth set to 113 (sdaa)
00:13:42.926 queue_depth set to 113 (sdab)
00:13:42.926 queue_depth set to 113 (sdac)
00:13:42.926 queue_depth set to 113 (sdad)
00:13:43.185 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job11: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job12: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job13: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job14: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job15: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job16: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job17: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job18: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job19: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job20: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job21: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job22: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job23: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job24: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job25: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.185 job26: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.186 job27: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.186 job28: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.186 job29: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:13:43.186 fio-3.35
00:13:43.186
00:13:43.186 Starting 30 threads
00:13:43.186 [2024-07-25 17:04:35.513877] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.519469] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.524439] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.529027] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.532720] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.536590] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.539672] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.542878] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.545649] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.549245] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.551493] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.553923] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.556294] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.558119] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.561067] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.562960] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.564997] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.567296] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.569504] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.571705] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.573908] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.576487] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.578516] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.580551] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.582428] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.584606] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.586456] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.588380] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.590552] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:43.186 [2024-07-25 17:04:35.592413] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.224433] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.240509] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.247706] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.250754] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.252883] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.255278] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.257496] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.259630] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.261653] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.263822] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.265943] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.268112] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.270352] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.272505] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.274679] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.277002] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.279276] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.281565] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.283969] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.286064] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.291957] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.397 [2024-07-25 17:04:46.294309] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.398 [2024-07-25 17:04:46.296509] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:55.398
[2024-07-25 17:04:46.298861] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:55.398 [2024-07-25 17:04:46.301194] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:55.398 [2024-07-25 17:04:46.303467] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:55.398 [2024-07-25 17:04:46.305788] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:55.398 [2024-07-25 17:04:46.308055] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:55.398 [2024-07-25 17:04:46.311519] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:55.398 00:13:55.398 job0: (groupid=0, jobs=1): err= 0: pid=71428: Thu Jul 25 17:04:46 2024 00:13:55.398 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10144msec); 0 zone resets 00:13:55.398 slat (usec): min=24, max=436, avg=58.30, stdev=29.86 00:13:55.398 clat (msec): min=15, max=293, avg=165.56, stdev=14.54 00:13:55.398 lat (msec): min=15, max=293, avg=165.62, stdev=14.54 00:13:55.398 clat percentiles (msec): 00:13:55.398 | 1.00th=[ 108], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.398 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.398 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.398 | 99.00th=[ 203], 99.50th=[ 253], 99.90th=[ 292], 99.95th=[ 292], 00:13:55.398 | 99.99th=[ 292] 00:13:55.398 bw ( KiB/s): min=23552, max=25088, per=3.33%, avg=24673.40, stdev=459.26, samples=20 00:13:55.398 iops : min= 92, max= 98, avg=96.30, stdev= 1.84, samples=20 00:13:55.398 lat (msec) : 20=0.10%, 50=0.31%, 100=0.51%, 250=98.57%, 500=0.51% 00:13:55.398 cpu : usr=0.22%, sys=0.42%, ctx=993, majf=0, minf=1 00:13:55.398 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.398 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.398 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.398 job1: (groupid=0, jobs=1): err= 0: pid=71429: Thu Jul 25 17:04:46 2024 00:13:55.398 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10143msec); 0 zone resets 00:13:55.398 slat (usec): min=28, max=207, avg=62.87, stdev=16.78 00:13:55.398 clat (msec): min=18, max=290, avg=165.54, stdev=14.18 00:13:55.398 lat (msec): min=18, max=290, avg=165.61, stdev=14.18 00:13:55.398 clat percentiles (msec): 00:13:55.398 | 1.00th=[ 110], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.398 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.398 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.398 | 99.00th=[ 199], 99.50th=[ 251], 99.90th=[ 292], 99.95th=[ 292], 00:13:55.398 | 99.99th=[ 292] 00:13:55.398 bw ( KiB/s): min=23552, max=25088, per=3.33%, avg=24678.30, stdev=455.41, samples=20 00:13:55.398 iops : min= 92, max= 98, avg=96.35, stdev= 1.81, samples=20 00:13:55.398 lat (msec) : 20=0.10%, 50=0.31%, 100=0.51%, 250=98.67%, 500=0.41% 00:13:55.398 cpu : usr=0.30%, sys=0.56%, ctx=994, majf=0, minf=1 00:13:55.398 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.398 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.398 issued rwts: 
total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.398 job2: (groupid=0, jobs=1): err= 0: pid=71432: Thu Jul 25 17:04:46 2024 00:13:55.398 write: IOPS=96, BW=24.2MiB/s (25.4MB/s)(246MiB/10155msec); 0 zone resets 00:13:55.398 slat (usec): min=25, max=178, avg=53.38, stdev=15.04 00:13:55.398 clat (msec): min=3, max=297, avg=165.07, stdev=17.43 00:13:55.398 lat (msec): min=3, max=297, avg=165.12, stdev=17.43 00:13:55.398 clat percentiles (msec): 00:13:55.398 | 1.00th=[ 74], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.398 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.398 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.398 | 99.00th=[ 205], 99.50th=[ 257], 99.90th=[ 296], 99.95th=[ 296], 00:13:55.398 | 99.99th=[ 296] 00:13:55.398 bw ( KiB/s): min=23552, max=26164, per=3.34%, avg=24780.90, stdev=568.52, samples=20 00:13:55.398 iops : min= 92, max= 102, avg=96.75, stdev= 2.22, samples=20 00:13:55.398 lat (msec) : 4=0.10%, 10=0.20%, 20=0.10%, 50=0.31%, 100=0.51% 00:13:55.398 lat (msec) : 250=98.27%, 500=0.51% 00:13:55.398 cpu : usr=0.27%, sys=0.36%, ctx=987, majf=0, minf=1 00:13:55.398 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.398 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.398 issued rwts: total=0,983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.398 job3: (groupid=0, jobs=1): err= 0: pid=71436: Thu Jul 25 17:04:46 2024 00:13:55.398 write: IOPS=96, BW=24.2MiB/s (25.4MB/s)(246MiB/10153msec); 0 zone resets 00:13:55.398 slat (usec): min=30, max=349, avg=68.14, stdev=19.96 00:13:55.398 clat (msec): min=4, max=297, avg=165.02, stdev=17.50 00:13:55.398 lat (msec): min=4, max=297, avg=165.09, stdev=17.50 00:13:55.398 clat percentiles (msec): 00:13:55.398 | 1.00th=[ 72], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.398 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.398 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.398 | 99.00th=[ 205], 99.50th=[ 257], 99.90th=[ 296], 99.95th=[ 296], 00:13:55.398 | 99.99th=[ 296] 00:13:55.398 bw ( KiB/s): min=23552, max=26164, per=3.34%, avg=24780.90, stdev=568.52, samples=20 00:13:55.398 iops : min= 92, max= 102, avg=96.75, stdev= 2.22, samples=20 00:13:55.398 lat (msec) : 10=0.31%, 20=0.20%, 50=0.20%, 100=0.51%, 250=98.27% 00:13:55.398 lat (msec) : 500=0.51% 00:13:55.398 cpu : usr=0.33%, sys=0.59%, ctx=992, majf=0, minf=1 00:13:55.398 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.398 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.398 issued rwts: total=0,983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.398 job4: (groupid=0, jobs=1): err= 0: pid=71438: Thu Jul 25 17:04:46 2024 00:13:55.398 write: IOPS=98, BW=24.5MiB/s (25.7MB/s)(249MiB/10154msec); 0 zone resets 00:13:55.398 slat (usec): min=27, max=2814, avg=68.96, stdev=88.60 00:13:55.398 clat (usec): min=669, max=303013, avg=162825.80, stdev=25428.89 00:13:55.398 lat (msec): min=3, max=303, avg=162.89, stdev=25.41 00:13:55.398 clat percentiles (msec): 
00:13:55.398 | 1.00th=[ 15], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.398 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.398 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.398 | 99.00th=[ 211], 99.50th=[ 264], 99.90th=[ 305], 99.95th=[ 305], 00:13:55.398 | 99.99th=[ 305] 00:13:55.398 bw ( KiB/s): min=23552, max=32768, per=3.38%, avg=25108.55, stdev=1854.18, samples=20 00:13:55.398 iops : min= 92, max= 128, avg=98.00, stdev= 7.26, samples=20 00:13:55.398 lat (usec) : 750=0.10% 00:13:55.398 lat (msec) : 4=0.10%, 10=0.50%, 20=0.80%, 50=0.70%, 100=0.60% 00:13:55.398 lat (msec) : 250=96.59%, 500=0.60% 00:13:55.398 cpu : usr=0.42%, sys=0.50%, ctx=994, majf=0, minf=1 00:13:55.398 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.398 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.398 issued rwts: total=0,996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.398 job5: (groupid=0, jobs=1): err= 0: pid=71439: Thu Jul 25 17:04:46 2024 00:13:55.398 write: IOPS=97, BW=24.5MiB/s (25.6MB/s)(248MiB/10153msec); 0 zone resets 00:13:55.398 slat (usec): min=24, max=249, avg=61.51, stdev=18.11 00:13:55.398 clat (msec): min=8, max=303, avg=163.36, stdev=23.65 00:13:55.398 lat (msec): min=8, max=303, avg=163.42, stdev=23.66 00:13:55.398 clat percentiles (msec): 00:13:55.398 | 1.00th=[ 19], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.398 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.398 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.398 | 99.00th=[ 211], 99.50th=[ 264], 99.90th=[ 305], 99.95th=[ 305], 00:13:55.398 | 99.99th=[ 305] 00:13:55.398 bw ( KiB/s): min=23552, max=31232, per=3.37%, avg=25031.75, stdev=1522.35, samples=20 00:13:55.398 iops : min= 92, max= 122, avg=97.70, stdev= 5.97, samples=20 00:13:55.398 lat (msec) : 10=0.40%, 20=0.60%, 50=0.91%, 100=0.60%, 250=96.88% 00:13:55.398 lat (msec) : 500=0.60% 00:13:55.398 cpu : usr=0.29%, sys=0.40%, ctx=1000, majf=0, minf=1 00:13:55.398 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.398 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.398 issued rwts: total=0,993,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.398 job6: (groupid=0, jobs=1): err= 0: pid=71440: Thu Jul 25 17:04:46 2024 00:13:55.398 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10143msec); 0 zone resets 00:13:55.398 slat (usec): min=20, max=637, avg=53.36, stdev=26.03 00:13:55.398 clat (msec): min=13, max=295, avg=165.56, stdev=14.88 00:13:55.398 lat (msec): min=13, max=295, avg=165.62, stdev=14.89 00:13:55.398 clat percentiles (msec): 00:13:55.398 | 1.00th=[ 106], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.398 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.398 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.398 | 99.00th=[ 205], 99.50th=[ 255], 99.90th=[ 296], 99.95th=[ 296], 00:13:55.398 | 99.99th=[ 296] 00:13:55.398 bw ( KiB/s): min=23552, max=25600, per=3.33%, avg=24678.40, stdev=457.95, samples=20 00:13:55.398 iops : min= 92, max= 100, avg=96.40, stdev= 1.79, 
samples=20 00:13:55.398 lat (msec) : 20=0.10%, 50=0.31%, 100=0.51%, 250=98.57%, 500=0.51% 00:13:55.398 cpu : usr=0.29%, sys=0.37%, ctx=1012, majf=0, minf=1 00:13:55.399 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.399 job7: (groupid=0, jobs=1): err= 0: pid=71471: Thu Jul 25 17:04:46 2024 00:13:55.399 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10139msec); 0 zone resets 00:13:55.399 slat (usec): min=23, max=320, avg=62.10, stdev=19.21 00:13:55.399 clat (msec): min=18, max=285, avg=165.49, stdev=13.82 00:13:55.399 lat (msec): min=19, max=285, avg=165.55, stdev=13.82 00:13:55.399 clat percentiles (msec): 00:13:55.399 | 1.00th=[ 111], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.399 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 167], 00:13:55.399 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.399 | 99.00th=[ 194], 99.50th=[ 245], 99.90th=[ 288], 99.95th=[ 288], 00:13:55.399 | 99.99th=[ 288] 00:13:55.399 bw ( KiB/s): min=23599, max=25088, per=3.33%, avg=24678.20, stdev=449.66, samples=20 00:13:55.399 iops : min= 92, max= 98, avg=96.35, stdev= 1.76, samples=20 00:13:55.399 lat (msec) : 20=0.10%, 50=0.31%, 100=0.41%, 250=98.77%, 500=0.41% 00:13:55.399 cpu : usr=0.35%, sys=0.48%, ctx=987, majf=0, minf=1 00:13:55.399 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.399 job8: (groupid=0, jobs=1): err= 0: pid=71472: Thu Jul 25 17:04:46 2024 00:13:55.399 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10140msec); 0 zone resets 00:13:55.399 slat (usec): min=25, max=319, avg=67.36, stdev=35.64 00:13:55.399 clat (msec): min=18, max=285, avg=165.50, stdev=13.91 00:13:55.399 lat (msec): min=18, max=285, avg=165.57, stdev=13.91 00:13:55.399 clat percentiles (msec): 00:13:55.399 | 1.00th=[ 111], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.399 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 167], 00:13:55.399 | 70.00th=[ 167], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.399 | 99.00th=[ 194], 99.50th=[ 247], 99.90th=[ 288], 99.95th=[ 288], 00:13:55.399 | 99.99th=[ 288] 00:13:55.399 bw ( KiB/s): min=23504, max=25088, per=3.33%, avg=24675.85, stdev=458.78, samples=20 00:13:55.399 iops : min= 91, max= 98, avg=96.30, stdev= 1.89, samples=20 00:13:55.399 lat (msec) : 20=0.10%, 50=0.31%, 100=0.41%, 250=98.77%, 500=0.41% 00:13:55.399 cpu : usr=0.26%, sys=0.43%, ctx=1036, majf=0, minf=1 00:13:55.399 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.399 job9: (groupid=0, jobs=1): err= 0: pid=71473: Thu 
Jul 25 17:04:46 2024 00:13:55.399 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10141msec); 0 zone resets 00:13:55.399 slat (usec): min=29, max=403, avg=71.28, stdev=33.21 00:13:55.399 clat (msec): min=18, max=285, avg=165.50, stdev=13.90 00:13:55.399 lat (msec): min=19, max=285, avg=165.57, stdev=13.91 00:13:55.399 clat percentiles (msec): 00:13:55.399 | 1.00th=[ 111], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.399 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 167], 00:13:55.399 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.399 | 99.00th=[ 194], 99.50th=[ 247], 99.90th=[ 288], 99.95th=[ 288], 00:13:55.399 | 99.99th=[ 288] 00:13:55.399 bw ( KiB/s): min=23552, max=25600, per=3.33%, avg=24675.80, stdev=485.34, samples=20 00:13:55.399 iops : min= 92, max= 100, avg=96.30, stdev= 1.95, samples=20 00:13:55.399 lat (msec) : 20=0.10%, 50=0.31%, 100=0.41%, 250=98.77%, 500=0.41% 00:13:55.399 cpu : usr=0.38%, sys=0.49%, ctx=1011, majf=0, minf=1 00:13:55.399 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.399 job10: (groupid=0, jobs=1): err= 0: pid=71474: Thu Jul 25 17:04:46 2024 00:13:55.399 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10143msec); 0 zone resets 00:13:55.399 slat (usec): min=21, max=307, avg=63.79, stdev=23.66 00:13:55.399 clat (msec): min=15, max=293, avg=165.55, stdev=14.52 00:13:55.399 lat (msec): min=15, max=293, avg=165.61, stdev=14.53 00:13:55.399 clat percentiles (msec): 00:13:55.399 | 1.00th=[ 108], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.399 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.399 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.399 | 99.00th=[ 203], 99.50th=[ 253], 99.90th=[ 292], 99.95th=[ 292], 00:13:55.399 | 99.99th=[ 292] 00:13:55.399 bw ( KiB/s): min=23552, max=25088, per=3.33%, avg=24675.85, stdev=455.68, samples=20 00:13:55.399 iops : min= 92, max= 98, avg=96.35, stdev= 1.76, samples=20 00:13:55.399 lat (msec) : 20=0.10%, 50=0.31%, 100=0.51%, 250=98.57%, 500=0.51% 00:13:55.399 cpu : usr=0.31%, sys=0.41%, ctx=995, majf=0, minf=1 00:13:55.399 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.399 job11: (groupid=0, jobs=1): err= 0: pid=71480: Thu Jul 25 17:04:46 2024 00:13:55.399 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10142msec); 0 zone resets 00:13:55.399 slat (usec): min=24, max=161, avg=56.66, stdev=14.37 00:13:55.399 clat (msec): min=17, max=290, avg=165.54, stdev=14.22 00:13:55.399 lat (msec): min=17, max=290, avg=165.60, stdev=14.22 00:13:55.399 clat percentiles (msec): 00:13:55.399 | 1.00th=[ 110], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.399 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.399 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.399 | 99.00th=[ 199], 
99.50th=[ 251], 99.90th=[ 292], 99.95th=[ 292], 00:13:55.399 | 99.99th=[ 292] 00:13:55.399 bw ( KiB/s): min=23920, max=25088, per=3.33%, avg=24668.70, stdev=406.82, samples=20 00:13:55.399 iops : min= 93, max= 98, avg=96.30, stdev= 1.66, samples=20 00:13:55.399 lat (msec) : 20=0.10%, 50=0.31%, 100=0.41%, 250=98.77%, 500=0.41% 00:13:55.399 cpu : usr=0.32%, sys=0.43%, ctx=994, majf=0, minf=1 00:13:55.399 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.399 job12: (groupid=0, jobs=1): err= 0: pid=71485: Thu Jul 25 17:04:46 2024 00:13:55.399 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10144msec); 0 zone resets 00:13:55.399 slat (usec): min=30, max=257, avg=64.19, stdev=17.03 00:13:55.399 clat (msec): min=15, max=293, avg=165.56, stdev=14.58 00:13:55.399 lat (msec): min=15, max=293, avg=165.63, stdev=14.58 00:13:55.399 clat percentiles (msec): 00:13:55.399 | 1.00th=[ 108], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.399 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.399 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.399 | 99.00th=[ 203], 99.50th=[ 253], 99.90th=[ 296], 99.95th=[ 296], 00:13:55.399 | 99.99th=[ 296] 00:13:55.399 bw ( KiB/s): min=23552, max=25088, per=3.33%, avg=24678.40, stdev=514.69, samples=20 00:13:55.399 iops : min= 92, max= 98, avg=96.40, stdev= 2.01, samples=20 00:13:55.399 lat (msec) : 20=0.10%, 50=0.31%, 100=0.51%, 250=98.57%, 500=0.51% 00:13:55.399 cpu : usr=0.40%, sys=0.46%, ctx=985, majf=0, minf=1 00:13:55.399 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.399 job13: (groupid=0, jobs=1): err= 0: pid=71509: Thu Jul 25 17:04:46 2024 00:13:55.399 write: IOPS=96, BW=24.2MiB/s (25.4MB/s)(246MiB/10152msec); 0 zone resets 00:13:55.399 slat (usec): min=23, max=2052, avg=58.37, stdev=69.56 00:13:55.399 clat (msec): min=5, max=298, avg=165.19, stdev=16.97 00:13:55.399 lat (msec): min=5, max=298, avg=165.25, stdev=16.97 00:13:55.399 clat percentiles (msec): 00:13:55.399 | 1.00th=[ 81], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.399 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.399 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.399 | 99.00th=[ 207], 99.50th=[ 257], 99.90th=[ 300], 99.95th=[ 300], 00:13:55.399 | 99.99th=[ 300] 00:13:55.399 bw ( KiB/s): min=23552, max=25600, per=3.34%, avg=24752.65, stdev=476.11, samples=20 00:13:55.399 iops : min= 92, max= 100, avg=96.65, stdev= 1.84, samples=20 00:13:55.399 lat (msec) : 10=0.20%, 20=0.20%, 50=0.20%, 100=0.51%, 250=98.37% 00:13:55.399 lat (msec) : 500=0.51% 00:13:55.399 cpu : usr=0.21%, sys=0.46%, ctx=1032, majf=0, minf=1 00:13:55.399 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.399 issued rwts: total=0,982,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.399 job14: (groupid=0, jobs=1): err= 0: pid=71520: Thu Jul 25 17:04:46 2024 00:13:55.400 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10147msec); 0 zone resets 00:13:55.400 slat (usec): min=26, max=152, avg=54.38, stdev=15.05 00:13:55.400 clat (msec): min=9, max=291, avg=165.44, stdev=14.87 00:13:55.400 lat (msec): min=10, max=291, avg=165.50, stdev=14.87 00:13:55.400 clat percentiles (msec): 00:13:55.400 | 1.00th=[ 103], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.400 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.400 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.400 | 99.00th=[ 201], 99.50th=[ 251], 99.90th=[ 292], 99.95th=[ 292], 00:13:55.400 | 99.99th=[ 292] 00:13:55.400 bw ( KiB/s): min=23552, max=25600, per=3.33%, avg=24701.50, stdev=467.01, samples=20 00:13:55.400 iops : min= 92, max= 100, avg=96.45, stdev= 1.85, samples=20 00:13:55.400 lat (msec) : 10=0.10%, 50=0.31%, 100=0.51%, 250=98.57%, 500=0.51% 00:13:55.400 cpu : usr=0.36%, sys=0.29%, ctx=980, majf=0, minf=1 00:13:55.400 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.400 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.400 issued rwts: total=0,980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.400 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.400 job15: (groupid=0, jobs=1): err= 0: pid=71629: Thu Jul 25 17:04:46 2024 00:13:55.400 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10140msec); 0 zone resets 00:13:55.400 slat (usec): min=14, max=268, avg=49.96, stdev=20.49 00:13:55.400 clat (msec): min=19, max=285, avg=165.52, stdev=13.90 00:13:55.400 lat (msec): min=19, max=285, avg=165.57, stdev=13.90 00:13:55.400 clat percentiles (msec): 00:13:55.400 | 1.00th=[ 112], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.400 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 167], 00:13:55.400 | 70.00th=[ 167], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.400 | 99.00th=[ 194], 99.50th=[ 247], 99.90th=[ 288], 99.95th=[ 288], 00:13:55.400 | 99.99th=[ 288] 00:13:55.400 bw ( KiB/s): min=23504, max=25600, per=3.33%, avg=24675.85, stdev=487.92, samples=20 00:13:55.400 iops : min= 91, max= 100, avg=96.30, stdev= 2.00, samples=20 00:13:55.400 lat (msec) : 20=0.10%, 50=0.31%, 100=0.41%, 250=98.77%, 500=0.41% 00:13:55.400 cpu : usr=0.29%, sys=0.32%, ctx=1017, majf=0, minf=1 00:13:55.400 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.400 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.400 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.400 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.400 job16: (groupid=0, jobs=1): err= 0: pid=71630: Thu Jul 25 17:04:46 2024 00:13:55.400 write: IOPS=96, BW=24.2MiB/s (25.4MB/s)(246MiB/10153msec); 0 zone resets 00:13:55.400 slat (usec): min=27, max=166, avg=63.08, stdev=15.46 00:13:55.400 clat (msec): min=6, max=296, avg=165.20, stdev=16.57 00:13:55.400 lat (msec): min=6, max=296, avg=165.26, stdev=16.57 00:13:55.400 clat percentiles 
(msec): 00:13:55.400 | 1.00th=[ 84], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.400 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.400 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.400 | 99.00th=[ 205], 99.50th=[ 255], 99.90th=[ 296], 99.95th=[ 296], 00:13:55.400 | 99.99th=[ 296] 00:13:55.400 bw ( KiB/s): min=23552, max=25651, per=3.34%, avg=24755.25, stdev=511.55, samples=20 00:13:55.400 iops : min= 92, max= 100, avg=96.65, stdev= 2.01, samples=20 00:13:55.400 lat (msec) : 10=0.10%, 20=0.20%, 50=0.31%, 100=0.51%, 250=98.37% 00:13:55.400 lat (msec) : 500=0.51% 00:13:55.400 cpu : usr=0.37%, sys=0.48%, ctx=985, majf=0, minf=1 00:13:55.400 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.400 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.400 issued rwts: total=0,982,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.400 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.400 job17: (groupid=0, jobs=1): err= 0: pid=71631: Thu Jul 25 17:04:46 2024 00:13:55.400 write: IOPS=96, BW=24.1MiB/s (25.2MB/s)(244MiB/10147msec); 0 zone resets 00:13:55.400 slat (usec): min=25, max=15049, avg=76.79, stdev=479.79 00:13:55.400 clat (msec): min=19, max=298, avg=165.69, stdev=14.58 00:13:55.400 lat (msec): min=29, max=298, avg=165.77, stdev=14.43 00:13:55.400 clat percentiles (msec): 00:13:55.400 | 1.00th=[ 112], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.400 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 167], 00:13:55.400 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.400 | 99.00th=[ 207], 99.50th=[ 257], 99.90th=[ 300], 99.95th=[ 300], 00:13:55.400 | 99.99th=[ 300] 00:13:55.400 bw ( KiB/s): min=23504, max=25600, per=3.32%, avg=24622.30, stdev=528.31, samples=20 00:13:55.400 iops : min= 91, max= 100, avg=96.10, stdev= 2.17, samples=20 00:13:55.400 lat (msec) : 20=0.10%, 50=0.31%, 100=0.41%, 250=98.67%, 500=0.51% 00:13:55.400 cpu : usr=0.25%, sys=0.47%, ctx=982, majf=0, minf=1 00:13:55.400 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.400 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.400 issued rwts: total=0,977,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.400 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.400 job18: (groupid=0, jobs=1): err= 0: pid=71632: Thu Jul 25 17:04:46 2024 00:13:55.400 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10144msec); 0 zone resets 00:13:55.400 slat (usec): min=27, max=199, avg=62.29, stdev=17.03 00:13:55.400 clat (msec): min=15, max=294, avg=165.57, stdev=14.58 00:13:55.400 lat (msec): min=15, max=294, avg=165.63, stdev=14.58 00:13:55.400 clat percentiles (msec): 00:13:55.400 | 1.00th=[ 108], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.400 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.400 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.400 | 99.00th=[ 203], 99.50th=[ 253], 99.90th=[ 296], 99.95th=[ 296], 00:13:55.400 | 99.99th=[ 296] 00:13:55.400 bw ( KiB/s): min=23552, max=25088, per=3.33%, avg=24678.40, stdev=514.69, samples=20 00:13:55.400 iops : min= 92, max= 98, avg=96.40, stdev= 2.01, samples=20 00:13:55.400 lat (msec) : 20=0.10%, 50=0.31%, 100=0.51%, 
250=98.57%, 500=0.51% 00:13:55.400 cpu : usr=0.39%, sys=0.45%, ctx=985, majf=0, minf=1 00:13:55.400 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.400 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.400 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.400 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.400 job19: (groupid=0, jobs=1): err= 0: pid=71633: Thu Jul 25 17:04:46 2024 00:13:55.400 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10141msec); 0 zone resets 00:13:55.400 slat (usec): min=24, max=664, avg=58.85, stdev=23.51 00:13:55.400 clat (msec): min=18, max=288, avg=165.53, stdev=14.00 00:13:55.400 lat (msec): min=18, max=288, avg=165.58, stdev=14.01 00:13:55.400 clat percentiles (msec): 00:13:55.400 | 1.00th=[ 111], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.400 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.400 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.400 | 99.00th=[ 197], 99.50th=[ 247], 99.90th=[ 288], 99.95th=[ 288], 00:13:55.400 | 99.99th=[ 288] 00:13:55.400 bw ( KiB/s): min=23552, max=25088, per=3.33%, avg=24673.45, stdev=462.23, samples=20 00:13:55.400 iops : min= 92, max= 98, avg=96.30, stdev= 1.89, samples=20 00:13:55.400 lat (msec) : 20=0.10%, 50=0.31%, 100=0.41%, 250=98.77%, 500=0.41% 00:13:55.400 cpu : usr=0.31%, sys=0.48%, ctx=985, majf=0, minf=1 00:13:55.400 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.400 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.400 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.400 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.400 job20: (groupid=0, jobs=1): err= 0: pid=71634: Thu Jul 25 17:04:46 2024 00:13:55.400 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10143msec); 0 zone resets 00:13:55.400 slat (usec): min=24, max=276, avg=58.82, stdev=16.47 00:13:55.400 clat (msec): min=16, max=292, avg=165.56, stdev=14.41 00:13:55.400 lat (msec): min=16, max=292, avg=165.61, stdev=14.41 00:13:55.400 clat percentiles (msec): 00:13:55.400 | 1.00th=[ 109], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.400 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.400 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.400 | 99.00th=[ 201], 99.50th=[ 251], 99.90th=[ 292], 99.95th=[ 292], 00:13:55.400 | 99.99th=[ 292] 00:13:55.400 bw ( KiB/s): min=23552, max=25088, per=3.33%, avg=24675.85, stdev=512.67, samples=20 00:13:55.400 iops : min= 92, max= 98, avg=96.35, stdev= 1.98, samples=20 00:13:55.400 lat (msec) : 20=0.10%, 50=0.31%, 100=0.51%, 250=98.57%, 500=0.51% 00:13:55.400 cpu : usr=0.35%, sys=0.44%, ctx=981, majf=0, minf=1 00:13:55.400 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.400 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.400 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.400 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.400 job21: (groupid=0, jobs=1): err= 0: pid=71635: Thu Jul 25 17:04:46 2024 00:13:55.400 write: IOPS=96, BW=24.1MiB/s 
(25.3MB/s)(245MiB/10137msec); 0 zone resets 00:13:55.400 slat (usec): min=16, max=206, avg=61.03, stdev=15.71 00:13:55.400 clat (msec): min=19, max=283, avg=165.45, stdev=13.69 00:13:55.400 lat (msec): min=19, max=283, avg=165.51, stdev=13.69 00:13:55.400 clat percentiles (msec): 00:13:55.400 | 1.00th=[ 111], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.400 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 167], 00:13:55.401 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.401 | 99.00th=[ 192], 99.50th=[ 243], 99.90th=[ 284], 99.95th=[ 284], 00:13:55.401 | 99.99th=[ 284] 00:13:55.401 bw ( KiB/s): min=23552, max=25088, per=3.33%, avg=24675.85, stdev=455.68, samples=20 00:13:55.401 iops : min= 92, max= 98, avg=96.35, stdev= 1.76, samples=20 00:13:55.401 lat (msec) : 20=0.10%, 50=0.20%, 100=0.51%, 250=98.77%, 500=0.41% 00:13:55.401 cpu : usr=0.37%, sys=0.47%, ctx=990, majf=0, minf=1 00:13:55.401 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.401 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.401 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.401 job22: (groupid=0, jobs=1): err= 0: pid=71636: Thu Jul 25 17:04:46 2024 00:13:55.401 write: IOPS=98, BW=24.5MiB/s (25.7MB/s)(249MiB/10153msec); 0 zone resets 00:13:55.401 slat (usec): min=24, max=208, avg=59.82, stdev=15.44 00:13:55.401 clat (msec): min=4, max=303, avg=162.87, stdev=25.22 00:13:55.401 lat (msec): min=4, max=303, avg=162.93, stdev=25.22 00:13:55.401 clat percentiles (msec): 00:13:55.401 | 1.00th=[ 16], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.401 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.401 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.401 | 99.00th=[ 211], 99.50th=[ 264], 99.90th=[ 305], 99.95th=[ 305], 00:13:55.401 | 99.99th=[ 305] 00:13:55.401 bw ( KiB/s): min=23552, max=32768, per=3.38%, avg=25108.55, stdev=1854.18, samples=20 00:13:55.401 iops : min= 92, max= 128, avg=98.00, stdev= 7.26, samples=20 00:13:55.401 lat (msec) : 10=0.60%, 20=0.70%, 50=0.90%, 100=0.60%, 250=96.59% 00:13:55.401 lat (msec) : 500=0.60% 00:13:55.401 cpu : usr=0.34%, sys=0.46%, ctx=997, majf=0, minf=1 00:13:55.401 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.401 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.401 issued rwts: total=0,996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.401 job23: (groupid=0, jobs=1): err= 0: pid=71637: Thu Jul 25 17:04:46 2024 00:13:55.401 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10147msec); 0 zone resets 00:13:55.401 slat (usec): min=24, max=2915, avg=57.47, stdev=94.21 00:13:55.401 clat (msec): min=3, max=298, avg=165.40, stdev=16.07 00:13:55.401 lat (msec): min=6, max=298, avg=165.46, stdev=16.04 00:13:55.401 clat percentiles (msec): 00:13:55.401 | 1.00th=[ 93], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.401 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.401 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.401 | 99.00th=[ 207], 99.50th=[ 257], 99.90th=[ 300], 
99.95th=[ 300], 00:13:55.401 | 99.99th=[ 300] 00:13:55.401 bw ( KiB/s): min=23552, max=25600, per=3.33%, avg=24698.95, stdev=464.93, samples=20 00:13:55.401 iops : min= 92, max= 100, avg=96.40, stdev= 1.82, samples=20 00:13:55.401 lat (msec) : 4=0.10%, 20=0.10%, 50=0.31%, 100=0.51%, 250=98.47% 00:13:55.401 lat (msec) : 500=0.51% 00:13:55.401 cpu : usr=0.31%, sys=0.35%, ctx=1023, majf=0, minf=1 00:13:55.401 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.401 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.401 issued rwts: total=0,980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.401 job24: (groupid=0, jobs=1): err= 0: pid=71638: Thu Jul 25 17:04:46 2024 00:13:55.401 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10143msec); 0 zone resets 00:13:55.401 slat (usec): min=26, max=282, avg=59.43, stdev=20.39 00:13:55.401 clat (msec): min=14, max=293, avg=165.54, stdev=14.59 00:13:55.401 lat (msec): min=15, max=293, avg=165.60, stdev=14.59 00:13:55.401 clat percentiles (msec): 00:13:55.401 | 1.00th=[ 108], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.401 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.401 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.401 | 99.00th=[ 201], 99.50th=[ 253], 99.90th=[ 292], 99.95th=[ 292], 00:13:55.401 | 99.99th=[ 292] 00:13:55.401 bw ( KiB/s): min=23552, max=25088, per=3.33%, avg=24675.85, stdev=455.68, samples=20 00:13:55.401 iops : min= 92, max= 98, avg=96.35, stdev= 1.76, samples=20 00:13:55.401 lat (msec) : 20=0.10%, 50=0.31%, 100=0.51%, 250=98.57%, 500=0.51% 00:13:55.401 cpu : usr=0.40%, sys=0.35%, ctx=1009, majf=0, minf=1 00:13:55.401 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.401 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.401 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.401 job25: (groupid=0, jobs=1): err= 0: pid=71639: Thu Jul 25 17:04:46 2024 00:13:55.401 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10140msec); 0 zone resets 00:13:55.401 slat (usec): min=30, max=1498, avg=67.89, stdev=49.69 00:13:55.401 clat (msec): min=19, max=286, avg=165.50, stdev=13.90 00:13:55.401 lat (msec): min=19, max=287, avg=165.57, stdev=13.90 00:13:55.401 clat percentiles (msec): 00:13:55.401 | 1.00th=[ 111], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.401 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.401 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.401 | 99.00th=[ 194], 99.50th=[ 247], 99.90th=[ 288], 99.95th=[ 288], 00:13:55.401 | 99.99th=[ 288] 00:13:55.401 bw ( KiB/s): min=23552, max=25600, per=3.33%, avg=24675.80, stdev=485.34, samples=20 00:13:55.401 iops : min= 92, max= 100, avg=96.30, stdev= 1.95, samples=20 00:13:55.401 lat (msec) : 20=0.10%, 50=0.31%, 100=0.41%, 250=98.77%, 500=0.41% 00:13:55.401 cpu : usr=0.36%, sys=0.45%, ctx=995, majf=0, minf=1 00:13:55.401 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.401 complete : 0=0.0%, 4=99.9%, 
8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.401 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.401 job26: (groupid=0, jobs=1): err= 0: pid=71640: Thu Jul 25 17:04:46 2024 00:13:55.401 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10141msec); 0 zone resets 00:13:55.401 slat (usec): min=14, max=1021, avg=67.14, stdev=34.30 00:13:55.401 clat (msec): min=18, max=288, avg=165.50, stdev=14.04 00:13:55.401 lat (msec): min=18, max=288, avg=165.57, stdev=14.03 00:13:55.401 clat percentiles (msec): 00:13:55.401 | 1.00th=[ 110], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.401 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.401 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.401 | 99.00th=[ 197], 99.50th=[ 247], 99.90th=[ 288], 99.95th=[ 288], 00:13:55.401 | 99.99th=[ 288] 00:13:55.401 bw ( KiB/s): min=23552, max=25088, per=3.33%, avg=24673.45, stdev=462.23, samples=20 00:13:55.401 iops : min= 92, max= 98, avg=96.30, stdev= 1.89, samples=20 00:13:55.401 lat (msec) : 20=0.10%, 50=0.31%, 100=0.51%, 250=98.67%, 500=0.41% 00:13:55.401 cpu : usr=0.34%, sys=0.56%, ctx=982, majf=0, minf=1 00:13:55.401 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.401 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.401 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.401 job27: (groupid=0, jobs=1): err= 0: pid=71641: Thu Jul 25 17:04:46 2024 00:13:55.401 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10141msec); 0 zone resets 00:13:55.401 slat (usec): min=23, max=1699, avg=62.67, stdev=63.93 00:13:55.401 clat (msec): min=17, max=286, avg=165.48, stdev=14.14 00:13:55.401 lat (msec): min=18, max=286, avg=165.54, stdev=14.11 00:13:55.401 clat percentiles (msec): 00:13:55.401 | 1.00th=[ 110], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.401 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 167], 00:13:55.401 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.401 | 99.00th=[ 194], 99.50th=[ 247], 99.90th=[ 288], 99.95th=[ 288], 00:13:55.401 | 99.99th=[ 288] 00:13:55.401 bw ( KiB/s): min=23552, max=25088, per=3.33%, avg=24678.25, stdev=452.40, samples=20 00:13:55.401 iops : min= 92, max= 98, avg=96.35, stdev= 1.76, samples=20 00:13:55.401 lat (msec) : 20=0.10%, 50=0.31%, 100=0.51%, 250=98.67%, 500=0.41% 00:13:55.401 cpu : usr=0.26%, sys=0.51%, ctx=1013, majf=0, minf=1 00:13:55.401 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.401 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.401 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.401 job28: (groupid=0, jobs=1): err= 0: pid=71642: Thu Jul 25 17:04:46 2024 00:13:55.401 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10157msec); 0 zone resets 00:13:55.401 slat (usec): min=26, max=19565, avg=76.03, stdev=623.78 00:13:55.401 clat (msec): min=7, max=301, avg=165.46, stdev=16.11 00:13:55.401 lat (msec): min=12, max=301, avg=165.54, stdev=15.92 00:13:55.401 clat percentiles (msec): 00:13:55.401 | 
1.00th=[ 94], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.401 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.401 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.401 | 99.00th=[ 209], 99.50th=[ 262], 99.90th=[ 300], 99.95th=[ 300], 00:13:55.401 | 99.99th=[ 300] 00:13:55.401 bw ( KiB/s): min=23552, max=25600, per=3.33%, avg=24673.35, stdev=485.69, samples=20 00:13:55.401 iops : min= 92, max= 100, avg=96.30, stdev= 1.89, samples=20 00:13:55.401 lat (msec) : 10=0.10%, 20=0.10%, 50=0.31%, 100=0.51%, 250=98.37% 00:13:55.402 lat (msec) : 500=0.61% 00:13:55.402 cpu : usr=0.23%, sys=0.48%, ctx=1006, majf=0, minf=1 00:13:55.402 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.402 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.402 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.402 job29: (groupid=0, jobs=1): err= 0: pid=71643: Thu Jul 25 17:04:46 2024 00:13:55.402 write: IOPS=96, BW=24.1MiB/s (25.3MB/s)(245MiB/10143msec); 0 zone resets 00:13:55.402 slat (usec): min=23, max=248, avg=59.22, stdev=17.84 00:13:55.402 clat (msec): min=16, max=292, avg=165.55, stdev=14.43 00:13:55.402 lat (msec): min=16, max=292, avg=165.61, stdev=14.43 00:13:55.402 clat percentiles (msec): 00:13:55.402 | 1.00th=[ 109], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 163], 00:13:55.402 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 165], 00:13:55.402 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 171], 95.00th=[ 174], 00:13:55.402 | 99.00th=[ 201], 99.50th=[ 251], 99.90th=[ 292], 99.95th=[ 292], 00:13:55.402 | 99.99th=[ 292] 00:13:55.402 bw ( KiB/s): min=23552, max=25088, per=3.33%, avg=24675.85, stdev=512.67, samples=20 00:13:55.402 iops : min= 92, max= 98, avg=96.35, stdev= 1.98, samples=20 00:13:55.402 lat (msec) : 20=0.10%, 50=0.31%, 100=0.51%, 250=98.57%, 500=0.51% 00:13:55.402 cpu : usr=0.37%, sys=0.41%, ctx=985, majf=0, minf=1 00:13:55.402 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=98.5%, 32=0.0%, >=64=0.0% 00:13:55.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.402 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.402 issued rwts: total=0,979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:55.402 00:13:55.402 Run status group 0 (all jobs): 00:13:55.402 WRITE: bw=724MiB/s (760MB/s), 24.1MiB/s-24.5MiB/s (25.2MB/s-25.7MB/s), io=7358MiB (7715MB), run=10137-10157msec 00:13:55.402 00:13:55.402 Disk stats (read/write): 00:13:55.402 sda: ios=48/956, merge=0/0, ticks=168/157426, in_queue=157595, util=95.70% 00:13:55.402 sdb: ios=48/956, merge=0/0, ticks=175/157426, in_queue=157601, util=95.66% 00:13:55.402 sdc: ios=48/961, merge=0/0, ticks=152/157708, in_queue=157861, util=95.96% 00:13:55.402 sdd: ios=48/961, merge=0/0, ticks=165/157656, in_queue=157821, util=96.15% 00:13:55.402 sde: ios=48/977, merge=0/0, ticks=176/157986, in_queue=158162, util=96.51% 00:13:55.402 sdf: ios=48/974, merge=0/0, ticks=154/158022, in_queue=158176, util=96.45% 00:13:55.402 sdg: ios=48/957, merge=0/0, ticks=111/157472, in_queue=157584, util=96.07% 00:13:55.402 sdh: ios=44/955, merge=0/0, ticks=173/157293, in_queue=157466, util=96.60% 00:13:55.402 sdi: ios=37/955, merge=0/0, ticks=182/157280, 
in_queue=157461, util=96.48% 00:13:55.402 sdj: ios=29/955, merge=0/0, ticks=138/157284, in_queue=157423, util=96.55% 00:13:55.402 sdk: ios=25/956, merge=0/0, ticks=86/157370, in_queue=157457, util=96.61% 00:13:55.402 sdl: ios=14/956, merge=0/0, ticks=83/157427, in_queue=157509, util=96.47% 00:13:55.402 sdm: ios=19/956, merge=0/0, ticks=73/157410, in_queue=157483, util=96.59% 00:13:55.402 sdn: ios=0/960, merge=0/0, ticks=0/157530, in_queue=157530, util=96.69% 00:13:55.402 sdo: ios=0/957, merge=0/0, ticks=0/157508, in_queue=157508, util=96.59% 00:13:55.402 sdp: ios=0/955, merge=0/0, ticks=0/157279, in_queue=157279, util=96.98% 00:13:55.402 sdq: ios=0/960, merge=0/0, ticks=0/157677, in_queue=157676, util=97.26% 00:13:55.402 sdr: ios=0/955, merge=0/0, ticks=0/157284, in_queue=157284, util=97.33% 00:13:55.402 sds: ios=0/956, merge=0/0, ticks=0/157402, in_queue=157402, util=97.47% 00:13:55.402 sdt: ios=0/956, merge=0/0, ticks=0/157465, in_queue=157466, util=97.58% 00:13:55.402 sdu: ios=0/956, merge=0/0, ticks=0/157431, in_queue=157431, util=97.72% 00:13:55.402 sdv: ios=0/955, merge=0/0, ticks=0/157289, in_queue=157289, util=97.75% 00:13:55.402 sdw: ios=0/977, merge=0/0, ticks=0/158052, in_queue=158052, util=98.29% 00:13:55.402 sdx: ios=0/958, merge=0/0, ticks=0/157432, in_queue=157432, util=98.12% 00:13:55.402 sdy: ios=0/956, merge=0/0, ticks=0/157372, in_queue=157372, util=98.07% 00:13:55.402 sdz: ios=0/955, merge=0/0, ticks=0/157264, in_queue=157264, util=98.12% 00:13:55.402 sdaa: ios=0/956, merge=0/0, ticks=0/157439, in_queue=157439, util=98.29% 00:13:55.402 sdab: ios=0/956, merge=0/0, ticks=0/157359, in_queue=157359, util=98.27% 00:13:55.402 sdac: ios=0/959, merge=0/0, ticks=0/157602, in_queue=157602, util=98.56% 00:13:55.402 sdad: ios=0/956, merge=0/0, ticks=0/157413, in_queue=157414, util=98.73% 00:13:55.402 [2024-07-25 17:04:46.313807] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:55.402 17:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@79 -- # sync 00:13:55.402 17:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:13:55.402 17:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@83 -- # rm -f 00:13:55.402 Cleaning up iSCSI connection 00:13:55.402 17:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@84 -- # iscsicleanup 00:13:55.402 17:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:13:55.402 17:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:13:55.402 Logging out of session [sid: 33, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 34, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 35, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 36, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 37, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 38, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 39, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 40, target: iqn.2016-06.io.spdk:Target8, portal: 
10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 41, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 42, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 43, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 44, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 45, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 46, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 47, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 48, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 49, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 50, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 51, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 52, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 53, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 54, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 55, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 56, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 57, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 58, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 59, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 60, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 61, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:13:55.402 Logging out of session [sid: 62, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:13:55.402 Logout of [sid: 33, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:13:55.402 Logout of [sid: 34, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:13:55.402 Logout of [sid: 35, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:13:55.402 Logout of [sid: 36, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:13:55.402 Logout of [sid: 37, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:13:55.402 Logout of [sid: 38, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:13:55.402 Logout of [sid: 39, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:13:55.402 Logout of [sid: 40, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 41, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 42, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 43, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 
00:13:55.403 Logout of [sid: 44, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 45, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 46, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 47, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 48, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 49, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 50, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 51, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 52, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 53, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 54, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 55, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 56, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 57, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 58, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 59, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 60, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 61, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:13:55.403 Logout of [sid: 62, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 
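Note on the block above: the thirty "Logout ... successful." records match the thirty targets created earlier in the run, and the WRITE summary a few lines up is internally consistent (30 jobs at 24.1-24.5 MiB/s each gives the reported 724 MiB/s aggregate, i.e. 7358 MiB written over the ~10.15 s runtime). The teardown itself is driven by the iscsicleanup helper invoked at multiconnection.sh@84. A minimal standalone sketch of that sequence, reconstructed only from the iscsiadm calls visible in this log (error handling in the real autotest_common.sh may differ), is:

    # Sketch of the cleanup shown above: log out of every open session,
    # then drop the persistent node records so a rerun starts clean.
    iscsicleanup() {
        echo 'Cleaning up iSCSI connection'
        iscsiadm -m node --logout    # produces the per-target Logout lines above
        iscsiadm -m node -o delete   # removes the node DB entries for all 30 targets
    }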
00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@985 -- # rm -rf 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@85 -- # remove_backends 00:13:55.403 INFO: Removing lvol bdevs 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@22 -- # echo 'INFO: Removing lvol bdevs' 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # seq 1 30 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_1 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_1 00:13:55.403 [2024-07-25 17:04:47.372147] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (9d133700-e5ad-4657-b0f6-42a132ee2745) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:55.403 INFO: lvol bdev lvs0/lbd_1 removed 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_1 removed' 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_2 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_2 00:13:55.403 [2024-07-25 17:04:47.575880] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (292ecd28-a242-4725-8295-63094981196f) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:55.403 INFO: lvol bdev lvs0/lbd_2 removed 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_2 removed' 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_3 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_3 00:13:55.403 [2024-07-25 17:04:47.791604] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (da1c6a39-5162-47aa-bb01-05941f7443a2) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:55.403 INFO: lvol bdev lvs0/lbd_3 removed 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_3 removed' 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_4 00:13:55.403 17:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_4 00:13:55.663 [2024-07-25 17:04:47.991348] lun.c: 398:bdev_event_cb: *NOTICE*: 
bdev name (ea2a2c5e-d7c4-439a-9e2e-270aca409d43) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:55.663 INFO: lvol bdev lvs0/lbd_4 removed 00:13:55.663 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_4 removed' 00:13:55.663 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:55.663 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_5 00:13:55.663 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_5 00:13:55.920 [2024-07-25 17:04:48.203103] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (2e398561-4589-48bc-b3f3-3b4d5ec98421) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:55.920 INFO: lvol bdev lvs0/lbd_5 removed 00:13:55.920 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_5 removed' 00:13:55.920 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:55.920 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_6 00:13:55.920 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_6 00:13:56.178 [2024-07-25 17:04:48.410820] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (94a830fc-d684-4dcb-a624-53532d86d75a) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:56.178 INFO: lvol bdev lvs0/lbd_6 removed 00:13:56.178 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_6 removed' 00:13:56.178 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:56.178 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_7 00:13:56.178 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_7 00:13:56.178 [2024-07-25 17:04:48.602555] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (91f479bf-265a-46f9-82ec-91091a797ef3) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:56.178 INFO: lvol bdev lvs0/lbd_7 removed 00:13:56.178 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_7 removed' 00:13:56.178 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:56.178 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_8 00:13:56.178 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_8 00:13:56.436 [2024-07-25 17:04:48.790342] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (3dedd105-e11b-4f93-8baf-583c87f514af) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:56.436 INFO: lvol bdev lvs0/lbd_8 removed 00:13:56.436 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_8 removed' 00:13:56.436 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:56.436 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_9 00:13:56.436 17:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_9 00:13:56.694 [2024-07-25 17:04:48.994276] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (330ad194-25ec-44c4-95d0-90f1591f7d09) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:56.694 INFO: lvol bdev lvs0/lbd_9 removed 00:13:56.694 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_9 removed' 00:13:56.694 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:56.694 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_10 00:13:56.694 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_10 00:13:56.952 [2024-07-25 17:04:49.174227] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (8215f815-1bd7-4ad2-8638-913cfe809b00) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:56.952 INFO: lvol bdev lvs0/lbd_10 removed 00:13:56.952 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_10 removed' 00:13:56.952 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:56.952 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_11 00:13:56.952 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_11 00:13:56.952 [2024-07-25 17:04:49.365985] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (679b0cc0-0d8c-4ddf-b2c9-cdff92492b50) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:56.952 INFO: lvol bdev lvs0/lbd_11 removed 00:13:56.952 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_11 removed' 00:13:56.952 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:56.952 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_12 00:13:56.952 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_12 00:13:57.210 [2024-07-25 17:04:49.545752] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (5477e787-47dc-4ec9-a8e4-fa048d3c1cfa) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:57.210 INFO: lvol bdev lvs0/lbd_12 removed 00:13:57.210 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_12 removed' 00:13:57.210 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:57.210 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_13 00:13:57.210 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_13 00:13:57.468 [2024-07-25 17:04:49.721519] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (33700d0b-3961-4760-b25e-c62b41888a92) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:57.468 INFO: lvol bdev lvs0/lbd_13 removed 00:13:57.468 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_13 removed' 00:13:57.468 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:57.468 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_14 00:13:57.468 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_14 00:13:57.468 [2024-07-25 17:04:49.901305] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (c0703189-44a0-4626-8446-82079f97f99d) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:57.468 INFO: lvol bdev lvs0/lbd_14 removed 00:13:57.468 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_14 removed' 00:13:57.468 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:57.468 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_15 00:13:57.468 17:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_15 00:13:57.727 [2024-07-25 17:04:50.109098] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (669c0af9-c253-4e02-ab1a-82a70c6ec228) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:57.727 INFO: lvol bdev lvs0/lbd_15 removed 00:13:57.727 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_15 removed' 00:13:57.727 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:57.727 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_16 00:13:57.727 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_16 00:13:57.985 [2024-07-25 17:04:50.300834] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (8ffb0399-4f4a-49ab-8c75-e209ea00ed5b) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:57.985 INFO: lvol bdev lvs0/lbd_16 removed 00:13:57.985 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_16 removed' 00:13:57.985 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:57.985 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_17 00:13:57.985 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_17 00:13:58.243 [2024-07-25 17:04:50.496582] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (3bc4fea5-84e2-468c-bb16-27c1971eda78) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:58.243 INFO: lvol bdev lvs0/lbd_17 removed 00:13:58.243 17:04:50 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_17 removed' 00:13:58.243 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:58.243 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_18 00:13:58.243 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_18 00:13:58.243 [2024-07-25 17:04:50.696326] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (a66cd048-6ceb-4e67-b6b6-202ca8711682) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:58.501 INFO: lvol bdev lvs0/lbd_18 removed 00:13:58.501 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_18 removed' 00:13:58.501 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:58.501 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_19 00:13:58.501 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_19 00:13:58.501 [2024-07-25 17:04:50.947994] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (9ffeec2f-a71f-4d57-85c6-9a7f43347dc4) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:58.501 INFO: lvol bdev lvs0/lbd_19 removed 00:13:58.501 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_19 removed' 00:13:58.501 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:58.501 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_20 00:13:58.501 17:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_20 00:13:58.760 [2024-07-25 17:04:51.159720] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (cefeee94-1200-4056-a98a-564411c9656f) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:58.760 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_20 removed' 00:13:58.760 INFO: lvol bdev lvs0/lbd_20 removed 00:13:58.760 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:58.760 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_21 00:13:58.760 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_21 00:13:59.018 [2024-07-25 17:04:51.359457] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (a0b19b4f-eb21-4b26-bdec-07ac611a070b) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:59.018 INFO: lvol bdev lvs0/lbd_21 removed 00:13:59.018 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_21 removed' 00:13:59.018 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:59.018 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_22 00:13:59.018 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_22 00:13:59.276 [2024-07-25 17:04:51.535240] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (190a840e-aa4f-4d94-a493-29015293b8c4) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:59.276 INFO: lvol bdev lvs0/lbd_22 removed 00:13:59.276 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_22 removed' 00:13:59.276 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:59.276 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_23 00:13:59.276 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_23 00:13:59.276 [2024-07-25 17:04:51.739055] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (586c2363-87da-4c84-beb1-396d8ee5e9cc) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:59.534 INFO: lvol bdev lvs0/lbd_23 removed 00:13:59.534 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_23 removed' 00:13:59.534 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:59.534 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_24 00:13:59.534 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_24 00:13:59.534 [2024-07-25 17:04:51.942792] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (3807995b-9d7a-4739-bf90-8e6c2a0027ea) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:59.534 INFO: lvol bdev lvs0/lbd_24 removed 00:13:59.534 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_24 removed' 00:13:59.534 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:59.534 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_25 00:13:59.534 17:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_25 00:13:59.792 [2024-07-25 17:04:52.146528] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (9c8016c6-7e96-4ea6-8182-ba155bf1e687) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:59.792 INFO: lvol bdev lvs0/lbd_25 removed 00:13:59.792 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_25 removed' 00:13:59.792 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:59.792 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_26 00:13:59.792 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_26 00:14:00.049 [2024-07-25 17:04:52.334312] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name 
(6c1a1bee-2133-48dd-a62b-806d4ed7af37) received event(SPDK_BDEV_EVENT_REMOVE) 00:14:00.049 INFO: lvol bdev lvs0/lbd_26 removed 00:14:00.049 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_26 removed' 00:14:00.049 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:14:00.049 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_27 00:14:00.049 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_27 00:14:00.305 [2024-07-25 17:04:52.554214] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (7943ced5-f85b-4394-88db-e9bfcb94c3ef) received event(SPDK_BDEV_EVENT_REMOVE) 00:14:00.305 INFO: lvol bdev lvs0/lbd_27 removed 00:14:00.305 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_27 removed' 00:14:00.305 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:14:00.305 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_28 00:14:00.305 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_28 00:14:00.305 [2024-07-25 17:04:52.738011] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (1cb584d9-b230-4abc-92bc-005766094dad) received event(SPDK_BDEV_EVENT_REMOVE) 00:14:00.305 INFO: lvol bdev lvs0/lbd_28 removed 00:14:00.305 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_28 removed' 00:14:00.305 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:14:00.305 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_29 00:14:00.305 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_29 00:14:00.563 [2024-07-25 17:04:52.929752] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (9adc308b-3d30-480e-9f07-22197a1913c5) received event(SPDK_BDEV_EVENT_REMOVE) 00:14:00.563 INFO: lvol bdev lvs0/lbd_29 removed 00:14:00.563 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_29 removed' 00:14:00.563 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:14:00.563 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_30 00:14:00.563 17:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_30 00:14:00.821 [2024-07-25 17:04:53.101530] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (ce93fd0a-a0b6-4720-b48e-40ccaf6ec36b) received event(SPDK_BDEV_EVENT_REMOVE) 00:14:00.821 INFO: lvol bdev lvs0/lbd_30 removed 00:14:00.821 17:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_30 removed' 00:14:00.821 17:04:53 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@28 -- # sleep 1 00:14:01.759 INFO: Removing lvol stores 00:14:01.759 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@30 -- # echo 'INFO: Removing lvol stores' 00:14:01.759 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:14:02.017 INFO: lvol store lvs0 removed 00:14:02.017 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@32 -- # echo 'INFO: lvol store lvs0 removed' 00:14:02.017 INFO: Removing NVMe 00:14:02.017 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@34 -- # echo 'INFO: Removing NVMe' 00:14:02.017 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:02.277 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@37 -- # return 0 00:14:02.277 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@86 -- # killprocess 69794 00:14:02.277 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 69794 ']' 00:14:02.277 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # kill -0 69794 00:14:02.277 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@955 -- # uname 00:14:02.277 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:02.277 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69794 00:14:02.277 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:02.277 killing process with pid 69794 00:14:02.277 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:02.277 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69794' 00:14:02.277 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@969 -- # kill 69794 00:14:02.277 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@974 -- # wait 69794 00:14:02.537 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@87 -- # iscsitestfini 00:14:02.537 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:14:02.537 00:14:02.537 real 0m43.752s 00:14:02.537 user 0m51.647s 00:14:02.537 sys 0m14.574s 00:14:02.537 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:02.537 17:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:02.537 ************************************ 00:14:02.537 END TEST iscsi_tgt_multiconnection 00:14:02.537 ************************************ 00:14:02.537 17:04:54 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@46 -- # '[' 0 -eq 1 ']' 00:14:02.537 17:04:54 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@49 -- # '[' 1 -eq 1 ']' 00:14:02.537 17:04:54 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@50 -- # hash ceph 00:14:02.537 17:04:54 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@54 -- # run_test iscsi_tgt_rbd /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd/rbd.sh 00:14:02.537 17:04:54 iscsi_tgt -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:14:02.537 17:04:54 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:02.537 17:04:54 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:14:02.537 ************************************ 00:14:02.537 START TEST iscsi_tgt_rbd 00:14:02.537 ************************************ 00:14:02.537 17:04:54 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd/rbd.sh 00:14:02.797 * Looking for test storage... 00:14:02.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@11 -- # iscsitestinit 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@13 -- # timing_enter rbd_setup 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@14 -- # rbd_setup 10.0.0.1 spdk_iscsi_ns 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1007 -- # '[' -z 10.0.0.1 ']' 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1011 -- # '[' -n spdk_iscsi_ns ']' 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1012 -- # ip netns list 00:14:02.797 
17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1012 -- # grep spdk_iscsi_ns 00:14:02.797 spdk_iscsi_ns (id: 0) 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1013 -- # NS_CMD='ip netns exec spdk_iscsi_ns' 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1020 -- # hash ceph 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1021 -- # export PG_NUM=128 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1021 -- # PG_NUM=128 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1022 -- # export RBD_POOL=rbd 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1022 -- # RBD_POOL=rbd 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1023 -- # export RBD_NAME=foo 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1023 -- # RBD_NAME=foo 00:14:02.797 17:04:55 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1024 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:14:02.797 + base_dir=/var/tmp/ceph 00:14:02.797 + image=/var/tmp/ceph/ceph_raw.img 00:14:02.797 + dev=/dev/loop200 00:14:02.797 + pkill -9 ceph 00:14:02.797 + sleep 3 00:14:06.141 + umount /dev/loop200p2 00:14:06.142 umount: /dev/loop200p2: no mount point specified. 00:14:06.142 + losetup -d /dev/loop200 00:14:06.142 losetup: /dev/loop200: failed to use device: No such device 00:14:06.142 + rm -rf /var/tmp/ceph 00:14:06.142 17:04:58 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1025 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 10.0.0.1 00:14:06.142 + set -e 00:14:06.142 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:14:06.142 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:14:06.142 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:14:06.142 + base_dir=/var/tmp/ceph 00:14:06.142 + mon_ip=10.0.0.1 00:14:06.142 + mon_dir=/var/tmp/ceph/mon.a 00:14:06.142 + pid_dir=/var/tmp/ceph/pid 00:14:06.142 + ceph_conf=/var/tmp/ceph/ceph.conf 00:14:06.142 + mnt_dir=/var/tmp/ceph/mnt 00:14:06.142 + image=/var/tmp/ceph_raw.img 00:14:06.142 + dev=/dev/loop200 00:14:06.142 + modprobe loop 00:14:06.142 + umount /dev/loop200p2 00:14:06.142 umount: /dev/loop200p2: no mount point specified. 00:14:06.142 + true 00:14:06.142 + losetup -d /dev/loop200 00:14:06.142 losetup: /dev/loop200: failed to use device: No such device 00:14:06.142 + true 00:14:06.142 + '[' -d /var/tmp/ceph ']' 00:14:06.142 + mkdir /var/tmp/ceph 00:14:06.142 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:14:06.142 + '[' '!' -e /var/tmp/ceph_raw.img ']' 00:14:06.142 + fallocate -l 4G /var/tmp/ceph_raw.img 00:14:06.142 + mknod /dev/loop200 b 7 200 00:14:06.142 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:14:06.142 + PARTED='parted -s' 00:14:06.142 + SGDISK=sgdisk 00:14:06.142 + echo 'Partitioning /dev/loop200' 00:14:06.142 Partitioning /dev/loop200 00:14:06.142 + parted -s /dev/loop200 mktable gpt 00:14:06.142 + sleep 2 00:14:08.039 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:14:08.039 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:14:08.039 + partno=0 00:14:08.039 + echo 'Setting name on /dev/loop200' 00:14:08.039 Setting name on /dev/loop200 00:14:08.039 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:14:08.972 Warning: The kernel is still using the old partition table. 
00:14:08.972 The new table will be used at the next reboot or after you 00:14:08.972 run partprobe(8) or kpartx(8) 00:14:08.972 The operation has completed successfully. 00:14:08.972 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:14:10.344 Warning: The kernel is still using the old partition table. 00:14:10.344 The new table will be used at the next reboot or after you 00:14:10.344 run partprobe(8) or kpartx(8) 00:14:10.344 The operation has completed successfully. 00:14:10.344 + kpartx /dev/loop200 00:14:10.344 loop200p1 : 0 4192256 /dev/loop200 2048 00:14:10.344 loop200p2 : 0 4192256 /dev/loop200 4194304 00:14:10.344 ++ ceph -v 00:14:10.344 ++ awk '{print $3}' 00:14:10.344 + ceph_version=17.2.7 00:14:10.344 + ceph_maj=17 00:14:10.344 + '[' 17 -gt 12 ']' 00:14:10.344 + update_config=true 00:14:10.344 + rm -f /var/log/ceph/ceph-mon.a.log 00:14:10.344 + set_min_mon_release='--set-min-mon-release 14' 00:14:10.344 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:14:10.344 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:14:10.344 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:14:10.344 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:14:10.344 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:14:10.344 = sectsz=512 attr=2, projid32bit=1 00:14:10.344 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:10.344 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:10.344 data = bsize=4096 blocks=524032, imaxpct=25 00:14:10.344 = sunit=0 swidth=0 blks 00:14:10.344 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:10.344 log =internal log bsize=4096 blocks=16384, version=2 00:14:10.344 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:10.344 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:10.344 Discarding blocks...Done. 00:14:10.344 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:14:10.344 + cat 00:14:10.344 + rm -rf '/var/tmp/ceph/mon.a/*' 00:14:10.344 + mkdir -p /var/tmp/ceph/mon.a 00:14:10.344 + mkdir -p /var/tmp/ceph/pid 00:14:10.344 + rm -f /etc/ceph/ceph.client.admin.keyring 00:14:10.344 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:14:10.344 creating /var/tmp/ceph/keyring 00:14:10.344 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:14:10.344 + monmaptool --create --clobber --add a 10.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:14:10.344 monmaptool: monmap file /var/tmp/ceph/monmap 00:14:10.344 monmaptool: generated fsid 061c7868-1354-4192-861e-4ba367a3aa7f 00:14:10.344 setting min_mon_release = octopus 00:14:10.344 epoch 0 00:14:10.344 fsid 061c7868-1354-4192-861e-4ba367a3aa7f 00:14:10.344 last_changed 2024-07-25T17:05:02.701376+0000 00:14:10.344 created 2024-07-25T17:05:02.701376+0000 00:14:10.344 min_mon_release 15 (octopus) 00:14:10.344 election_strategy: 1 00:14:10.344 0: v2:10.0.0.1:12046/0 mon.a 00:14:10.344 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:14:10.344 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:14:10.602 + '[' true = true ']' 00:14:10.602 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:14:10.602 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:14:10.602 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:14:10.602 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:14:10.602 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:14:10.602 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:14:10.602 ++ hostname 00:14:10.602 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:14:10.602 + true 00:14:10.602 + '[' true = true ']' 00:14:10.602 + ceph-conf --name mon.a --show-config-value log_file 00:14:10.602 /var/log/ceph/ceph-mon.a.log 00:14:10.602 ++ grep id 00:14:10.602 ++ ceph -s 00:14:10.602 ++ awk '{print $2}' 00:14:10.859 + fsid=061c7868-1354-4192-861e-4ba367a3aa7f 00:14:10.859 + sed -i 's/perf = true/perf = true\n\tfsid = 061c7868-1354-4192-861e-4ba367a3aa7f \n/g' /var/tmp/ceph/ceph.conf 00:14:10.859 + (( ceph_maj < 18 )) 00:14:10.859 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:14:10.859 + cat /var/tmp/ceph/ceph.conf 00:14:10.859 [global] 00:14:10.859 debug_lockdep = 0/0 00:14:10.859 debug_context = 0/0 00:14:10.859 debug_crush = 0/0 00:14:10.859 debug_buffer = 0/0 00:14:10.859 debug_timer = 0/0 00:14:10.859 debug_filer = 0/0 00:14:10.859 debug_objecter = 0/0 00:14:10.859 debug_rados = 0/0 00:14:10.859 debug_rbd = 0/0 00:14:10.859 debug_ms = 0/0 00:14:10.859 debug_monc = 0/0 00:14:10.859 debug_tp = 0/0 00:14:10.859 debug_auth = 0/0 00:14:10.859 debug_finisher = 0/0 00:14:10.859 debug_heartbeatmap = 0/0 00:14:10.859 debug_perfcounter = 0/0 00:14:10.859 debug_asok = 0/0 00:14:10.860 debug_throttle = 0/0 00:14:10.860 debug_mon = 0/0 00:14:10.860 debug_paxos = 0/0 00:14:10.860 debug_rgw = 0/0 00:14:10.860 00:14:10.860 perf = true 00:14:10.860 osd objectstore = filestore 00:14:10.860 00:14:10.860 fsid = 061c7868-1354-4192-861e-4ba367a3aa7f 00:14:10.860 00:14:10.860 mutex_perf_counter = false 00:14:10.860 throttler_perf_counter = false 00:14:10.860 rbd cache = false 00:14:10.860 mon_allow_pool_delete = true 00:14:10.860 00:14:10.860 osd_pool_default_size = 1 00:14:10.860 
00:14:10.860 [mon] 00:14:10.860 mon_max_pool_pg_num=166496 00:14:10.860 mon_osd_max_split_count = 10000 00:14:10.860 mon_pg_warn_max_per_osd = 10000 00:14:10.860 00:14:10.860 [osd] 00:14:10.860 osd_op_threads = 64 00:14:10.860 filestore_queue_max_ops=5000 00:14:10.860 filestore_queue_committing_max_ops=5000 00:14:10.860 journal_max_write_entries=1000 00:14:10.860 journal_queue_max_ops=3000 00:14:10.860 objecter_inflight_ops=102400 00:14:10.860 filestore_wbthrottle_enable=false 00:14:10.860 filestore_queue_max_bytes=1048576000 00:14:10.860 filestore_queue_committing_max_bytes=1048576000 00:14:10.860 journal_max_write_bytes=1048576000 00:14:10.860 journal_queue_max_bytes=1048576000 00:14:10.860 ms_dispatch_throttle_bytes=1048576000 00:14:10.860 objecter_inflight_op_bytes=1048576000 00:14:10.860 filestore_max_sync_interval=10 00:14:10.860 osd_client_message_size_cap = 0 00:14:10.860 osd_client_message_cap = 0 00:14:10.860 osd_enable_op_tracker = false 00:14:10.860 filestore_fd_cache_size = 10240 00:14:10.860 filestore_fd_cache_shards = 64 00:14:10.860 filestore_op_threads = 16 00:14:10.860 osd_op_num_shards = 48 00:14:10.860 osd_op_num_threads_per_shard = 2 00:14:10.860 osd_pg_object_context_cache_count = 10240 00:14:10.860 filestore_odsync_write = True 00:14:10.860 journal_dynamic_throttle = True 00:14:10.860 00:14:10.860 [osd.0] 00:14:10.860 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:14:10.860 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:14:10.860 00:14:10.860 # add mon address 00:14:10.860 [mon.a] 00:14:10.860 mon addr = v2:10.0.0.1:12046 00:14:10.860 + i=0 00:14:10.860 + mkdir -p /var/tmp/ceph/mnt 00:14:10.860 ++ uuidgen 00:14:10.860 + uuid=07230767-212e-4004-87ac-2744f08df10b 00:14:10.860 + ceph -c /var/tmp/ceph/ceph.conf osd create 07230767-212e-4004-87ac-2744f08df10b 0 00:14:11.119 0 00:14:11.119 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid 07230767-212e-4004-87ac-2744f08df10b --check-needs-journal --no-mon-config 00:14:11.378 2024-07-25T17:05:03.619+0000 7f61b145e400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:14:11.378 2024-07-25T17:05:03.619+0000 7f61b145e400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:14:11.378 2024-07-25T17:05:03.675+0000 7f61b145e400 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected 07230767-212e-4004-87ac-2744f08df10b, invalid (someone else's?) 
journal 00:14:11.378 2024-07-25T17:05:03.709+0000 7f61b145e400 -1 journal do_read_entry(4096): bad header magic 00:14:11.378 2024-07-25T17:05:03.709+0000 7f61b145e400 -1 journal do_read_entry(4096): bad header magic 00:14:11.378 ++ hostname 00:14:11.378 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:14:12.747 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:14:12.747 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:14:13.005 added key for osd.0 00:14:13.005 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:14:13.262 + class_dir=/lib64/rados-classes 00:14:13.262 + [[ -e /lib64/rados-classes ]] 00:14:13.262 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:14:13.520 + pkill -9 ceph-osd 00:14:13.520 + true 00:14:13.520 + sleep 2 00:14:15.423 + mkdir -p /var/tmp/ceph/pid 00:14:15.423 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:14:15.423 2024-07-25T17:05:07.869+0000 7f1201282400 -1 Falling back to public interface 00:14:15.681 2024-07-25T17:05:07.916+0000 7f1201282400 -1 journal do_read_entry(8192): bad header magic 00:14:15.681 2024-07-25T17:05:07.916+0000 7f1201282400 -1 journal do_read_entry(8192): bad header magic 00:14:15.681 2024-07-25T17:05:07.936+0000 7f1201282400 -1 osd.0 0 log_to_monitors true 00:14:15.682 17:05:08 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1027 -- # ip netns exec spdk_iscsi_ns ceph osd pool create rbd 128 00:14:16.668 pool 'rbd' created 00:14:16.668 17:05:09 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1028 -- # ip netns exec spdk_iscsi_ns rbd create foo --size 1000 00:14:23.231 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@15 -- # trap 'rbd_cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:23.231 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@16 -- # timing_exit rbd_setup 00:14:23.231 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:23.231 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:23.231 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:14:23.231 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@20 -- # timing_enter start_iscsi_tgt 00:14:23.231 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:23.231 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:23.231 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@23 -- # pid=73013 00:14:23.231 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@22 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:14:23.231 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@25 -- # trap 'killprocess $pid; rbd_cleanup; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:14:23.231 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@27 -- # waitforlisten 73013 00:14:23.231 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@831 -- # '[' -z 73013 ']' 00:14:23.232 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.232 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:14:23.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.232 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.232 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:23.232 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:23.232 [2024-07-25 17:05:15.189872] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:23.232 [2024-07-25 17:05:15.189947] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73013 ] 00:14:23.232 [2024-07-25 17:05:15.331006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.232 [2024-07-25 17:05:15.423538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.232 [2024-07-25 17:05:15.423722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.232 [2024-07-25 17:05:15.423913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.232 [2024-07-25 17:05:15.423987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.800 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:23.800 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@864 -- # return 0 00:14:23.800 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@28 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:14:23.800 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.800 17:05:15 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:23.800 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.800 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@29 -- # rpc_cmd framework_start_init 00:14:23.800 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.800 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:23.800 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.800 iscsi_tgt is listening. Running tests... 00:14:23.801 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@30 -- # echo 'iscsi_tgt is listening. Running tests...' 
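The target is launched with --wait-for-rpc, so subsystem initialization is deferred until the two RPCs above are issued. Stripped of the harness wrappers, and assuming rpc_cmd wraps scripts/rpc.py as in SPDK's autotest common code, the startup sequence is:

    # Paths relative to an SPDK checkout.
    ip netns exec spdk_iscsi_ns ./build/bin/iscsi_tgt -m 0xF --wait-for-rpc &
    ./scripts/rpc.py iscsi_set_options -o 30 -a 16   # options exactly as traced
    ./scripts/rpc.py framework_start_init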
00:14:23.801 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@32 -- # timing_exit start_iscsi_tgt 00:14:23.801 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:23.801 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:23.801 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@34 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:14:23.801 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.801 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:23.801 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.801 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@35 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:14:23.801 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.801 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@36 -- # rpc_cmd bdev_rbd_register_cluster iscsi_rbd_cluster --key-file /etc/ceph/ceph.client.admin.keyring --config-file /etc/ceph/ceph.conf 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@36 -- # rbd_cluster_name=iscsi_rbd_cluster 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@37 -- # rpc_cmd bdev_rbd_get_clusters_info -b iscsi_rbd_cluster 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:24.060 { 00:14:24.060 "cluster_name": "iscsi_rbd_cluster", 00:14:24.060 "config_file": "/etc/ceph/ceph.conf", 00:14:24.060 "key_file": "/etc/ceph/ceph.client.admin.keyring" 00:14:24.060 } 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@38 -- # rpc_cmd bdev_rbd_create rbd foo 4096 -c iscsi_rbd_cluster 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:24.060 [2024-07-25 17:05:16.343287] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@38 -- # rbd_bdev=Ceph0 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@39 -- # rpc_cmd bdev_get_bdevs 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:24.060 [ 00:14:24.060 { 00:14:24.060 "name": "Ceph0", 00:14:24.060 "aliases": [ 00:14:24.060 "4e3bdec8-ecaa-4b0e-820c-a05df960c9a4" 00:14:24.060 ], 00:14:24.060 "product_name": "Ceph Rbd Disk", 00:14:24.060 "block_size": 4096, 00:14:24.060 "num_blocks": 256000, 00:14:24.060 "uuid": "4e3bdec8-ecaa-4b0e-820c-a05df960c9a4", 00:14:24.060 "assigned_rate_limits": { 
00:14:24.060 "rw_ios_per_sec": 0, 00:14:24.060 "rw_mbytes_per_sec": 0, 00:14:24.060 "r_mbytes_per_sec": 0, 00:14:24.060 "w_mbytes_per_sec": 0 00:14:24.060 }, 00:14:24.060 "claimed": false, 00:14:24.060 "zoned": false, 00:14:24.060 "supported_io_types": { 00:14:24.060 "read": true, 00:14:24.060 "write": true, 00:14:24.060 "unmap": true, 00:14:24.060 "flush": true, 00:14:24.060 "reset": true, 00:14:24.060 "nvme_admin": false, 00:14:24.060 "nvme_io": false, 00:14:24.060 "nvme_io_md": false, 00:14:24.060 "write_zeroes": true, 00:14:24.060 "zcopy": false, 00:14:24.060 "get_zone_info": false, 00:14:24.060 "zone_management": false, 00:14:24.060 "zone_append": false, 00:14:24.060 "compare": false, 00:14:24.060 "compare_and_write": true, 00:14:24.060 "abort": false, 00:14:24.060 "seek_hole": false, 00:14:24.060 "seek_data": false, 00:14:24.060 "copy": false, 00:14:24.060 "nvme_iov_md": false 00:14:24.060 }, 00:14:24.060 "driver_specific": { 00:14:24.060 "rbd": { 00:14:24.060 "pool_name": "rbd", 00:14:24.060 "rbd_name": "foo", 00:14:24.060 "config_file": "/etc/ceph/ceph.conf", 00:14:24.060 "key_file": "/etc/ceph/ceph.client.admin.keyring" 00:14:24.060 } 00:14:24.060 } 00:14:24.060 } 00:14:24.060 ] 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@41 -- # rpc_cmd bdev_rbd_resize Ceph0 2000 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:24.060 true 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # grep num_blocks 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # sed 's/[^[:digit:]]//g' 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # rpc_cmd bdev_get_bdevs 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # num_block=512000 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@44 -- # total_size=2000 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@45 -- # '[' 2000 '!=' 2000 ']' 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@53 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Ceph0:0 1:2 64 -d 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.060 17:05:16 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@54 -- # sleep 1 00:14:24.998 17:05:17 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@56 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:14:25.257 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:14:25.257 17:05:17 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@57 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:14:25.258 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:14:25.258 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:14:25.258 17:05:17 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@58 -- # waitforiscsidevices 1 00:14:25.258 17:05:17 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@116 -- # local num=1 00:14:25.258 17:05:17 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:14:25.258 17:05:17 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:14:25.258 [2024-07-25 17:05:17.520200] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:25.258 17:05:17 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:14:25.258 17:05:17 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:14:25.258 17:05:17 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # n=1 00:14:25.258 17:05:17 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:14:25.258 17:05:17 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@123 -- # return 0 00:14:25.258 17:05:17 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@60 -- # trap 'iscsicleanup; killprocess $pid; rbd_cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:25.258 17:05:17 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:14:25.258 [global] 00:14:25.258 thread=1 00:14:25.258 invalidate=1 00:14:25.258 rw=randrw 00:14:25.258 time_based=1 00:14:25.258 runtime=1 00:14:25.258 ioengine=libaio 00:14:25.258 direct=1 00:14:25.258 bs=4096 00:14:25.258 iodepth=1 00:14:25.258 norandommap=0 00:14:25.258 numjobs=1 00:14:25.258 00:14:25.258 verify_dump=1 00:14:25.258 verify_backlog=512 00:14:25.258 verify_state_save=0 00:14:25.258 do_verify=1 00:14:25.258 verify=crc32c-intel 00:14:25.258 [job0] 00:14:25.258 filename=/dev/sda 00:14:25.258 queue_depth set to 113 (sda) 00:14:25.258 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:25.258 fio-3.35 00:14:25.258 Starting 1 thread 00:14:25.258 [2024-07-25 17:05:17.713313] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:26.635 [2024-07-25 17:05:18.827297] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:26.635 00:14:26.635 job0: (groupid=0, jobs=1): err= 0: pid=73138: Thu Jul 25 17:05:18 2024 00:14:26.635 read: IOPS=50, BW=203KiB/s (208kB/s)(204KiB/1003msec) 00:14:26.635 slat (nsec): min=14689, max=93756, avg=37266.06, stdev=14870.28 00:14:26.635 clat (usec): min=146, max=1414, avg=345.18, stdev=260.69 00:14:26.635 lat (usec): min=168, max=1457, avg=382.45, stdev=264.32 00:14:26.635 clat percentiles (usec): 00:14:26.635 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 172], 20.00th=[ 194], 00:14:26.635 | 30.00th=[ 208], 40.00th=[ 239], 50.00th=[ 249], 60.00th=[ 297], 00:14:26.635 | 70.00th=[ 351], 80.00th=[ 379], 90.00th=[ 676], 95.00th=[ 979], 00:14:26.635 | 99.00th=[ 1418], 99.50th=[ 1418], 99.90th=[ 1418], 99.95th=[ 1418], 00:14:26.635 | 99.99th=[ 1418] 00:14:26.635 bw ( KiB/s): min= 184, max= 224, per=100.00%, avg=204.00, stdev=28.28, samples=2 00:14:26.635 iops : min= 46, max= 56, avg=51.00, stdev= 7.07, samples=2 00:14:26.635 write: IOPS=57, BW=231KiB/s (237kB/s)(232KiB/1003msec); 0 zone resets 00:14:26.635 slat (nsec): min=13697, max=82785, avg=41932.71, stdev=13671.71 00:14:26.635 clat (usec): min=5140, max=44594, avg=16884.84, stdev=5688.12 00:14:26.635 lat (usec): min=5171, max=44655, avg=16926.77, stdev=5692.44 00:14:26.635 clat percentiles (usec): 00:14:26.635 | 1.00th=[ 5145], 5.00th=[ 6587], 
10.00th=[10421], 20.00th=[14615], 00:14:26.635 | 30.00th=[15664], 40.00th=[16188], 50.00th=[16581], 60.00th=[17433], 00:14:26.635 | 70.00th=[17957], 80.00th=[19006], 90.00th=[22152], 95.00th=[26084], 00:14:26.635 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:14:26.635 | 99.99th=[44827] 00:14:26.635 bw ( KiB/s): min= 216, max= 240, per=98.57%, avg=228.00, stdev=16.97, samples=2 00:14:26.635 iops : min= 54, max= 60, avg=57.00, stdev= 4.24, samples=2 00:14:26.635 lat (usec) : 250=23.85%, 500=16.51%, 750=2.75%, 1000=1.83% 00:14:26.635 lat (msec) : 2=1.83%, 10=3.67%, 20=41.28%, 50=8.26% 00:14:26.635 cpu : usr=0.10%, sys=0.60%, ctx=110, majf=0, minf=1 00:14:26.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:26.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.635 issued rwts: total=51,58,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:26.635 00:14:26.635 Run status group 0 (all jobs): 00:14:26.635 READ: bw=203KiB/s (208kB/s), 203KiB/s-203KiB/s (208kB/s-208kB/s), io=204KiB (209kB), run=1003-1003msec 00:14:26.635 WRITE: bw=231KiB/s (237kB/s), 231KiB/s-231KiB/s (237kB/s-237kB/s), io=232KiB (238kB), run=1003-1003msec 00:14:26.635 00:14:26.635 Disk stats (read/write): 00:14:26.635 sda: ios=89/50, merge=0/0, ticks=19/845, in_queue=865, util=90.78% 00:14:26.635 17:05:18 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:14:26.635 [global] 00:14:26.635 thread=1 00:14:26.635 invalidate=1 00:14:26.635 rw=randrw 00:14:26.635 time_based=1 00:14:26.635 runtime=1 00:14:26.635 ioengine=libaio 00:14:26.635 direct=1 00:14:26.635 bs=131072 00:14:26.635 iodepth=32 00:14:26.635 norandommap=0 00:14:26.635 numjobs=1 00:14:26.635 00:14:26.635 verify_dump=1 00:14:26.635 verify_backlog=512 00:14:26.635 verify_state_save=0 00:14:26.635 do_verify=1 00:14:26.635 verify=crc32c-intel 00:14:26.635 [job0] 00:14:26.635 filename=/dev/sda 00:14:26.635 queue_depth set to 113 (sda) 00:14:26.635 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:14:26.635 fio-3.35 00:14:26.635 Starting 1 thread 00:14:26.635 [2024-07-25 17:05:19.050290] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:28.540 [2024-07-25 17:05:20.861821] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:28.540 00:14:28.540 job0: (groupid=0, jobs=1): err= 0: pid=73184: Thu Jul 25 17:05:20 2024 00:14:28.540 read: IOPS=63, BW=8151KiB/s (8347kB/s)(13.5MiB/1696msec) 00:14:28.540 slat (usec): min=11, max=360, avg=40.62, stdev=38.07 00:14:28.540 clat (usec): min=204, max=55253, avg=1982.41, stdev=5583.11 00:14:28.540 lat (usec): min=228, max=55284, avg=2023.03, stdev=5579.53 00:14:28.540 clat percentiles (usec): 00:14:28.540 | 1.00th=[ 206], 5.00th=[ 221], 10.00th=[ 233], 20.00th=[ 277], 00:14:28.540 | 30.00th=[ 326], 40.00th=[ 375], 50.00th=[ 445], 60.00th=[ 529], 00:14:28.540 | 70.00th=[ 1172], 80.00th=[ 2114], 90.00th=[ 6390], 95.00th=[ 6587], 00:14:28.540 | 99.00th=[ 7570], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:14:28.540 | 99.99th=[55313] 00:14:28.540 bw ( KiB/s): min= 5120, max=22528, per=100.00%, avg=13824.00, stdev=12309.31, samples=2 00:14:28.540 iops : min= 40, max= 176, avg=108.00, 
stdev=96.17, samples=2 00:14:28.540 write: IOPS=66, BW=8528KiB/s (8733kB/s)(14.1MiB/1696msec); 0 zone resets 00:14:28.540 slat (usec): min=40, max=776, avg=125.76, stdev=76.69 00:14:28.540 clat (msec): min=22, max=1387, avg=471.59, stdev=422.96 00:14:28.540 lat (msec): min=22, max=1387, avg=471.71, stdev=422.96 00:14:28.540 clat percentiles (msec): 00:14:28.540 | 1.00th=[ 23], 5.00th=[ 45], 10.00th=[ 77], 20.00th=[ 134], 00:14:28.540 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 451], 00:14:28.540 | 70.00th=[ 684], 80.00th=[ 902], 90.00th=[ 1183], 95.00th=[ 1368], 00:14:28.540 | 99.00th=[ 1385], 99.50th=[ 1385], 99.90th=[ 1385], 99.95th=[ 1385], 00:14:28.540 | 99.99th=[ 1385] 00:14:28.540 bw ( KiB/s): min= 256, max=15616, per=82.04%, avg=6997.33, stdev=7850.20, samples=3 00:14:28.540 iops : min= 2, max= 122, avg=54.67, stdev=61.33, samples=3 00:14:28.540 lat (usec) : 250=6.33%, 500=21.27%, 750=4.07%, 1000=1.36% 00:14:28.540 lat (msec) : 2=4.98%, 4=3.62%, 10=6.79%, 50=2.71%, 100=4.98% 00:14:28.540 lat (msec) : 250=19.46%, 500=4.98%, 750=6.33%, 1000=4.52%, 2000=8.60% 00:14:28.540 cpu : usr=0.47%, sys=0.59%, ctx=216, majf=0, minf=1 00:14:28.540 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=86.0%, >=64=0.0% 00:14:28.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.540 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.5%, 64=0.0%, >=64=0.0% 00:14:28.541 issued rwts: total=108,113,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:28.541 latency : target=0, window=0, percentile=100.00%, depth=32 00:14:28.541 00:14:28.541 Run status group 0 (all jobs): 00:14:28.541 READ: bw=8151KiB/s (8347kB/s), 8151KiB/s-8151KiB/s (8347kB/s-8347kB/s), io=13.5MiB (14.2MB), run=1696-1696msec 00:14:28.541 WRITE: bw=8528KiB/s (8733kB/s), 8528KiB/s-8528KiB/s (8733kB/s-8733kB/s), io=14.1MiB (14.8MB), run=1696-1696msec 00:14:28.541 00:14:28.541 Disk stats (read/write): 00:14:28.541 sda: ios=156/109, merge=0/0, ticks=187/40506, in_queue=40694, util=93.38% 00:14:28.541 17:05:20 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@65 -- # rm -f ./local-job0-0-verify.state 00:14:28.541 Cleaning up iSCSI connection 00:14:28.541 17:05:20 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@67 -- # trap - SIGINT SIGTERM EXIT 00:14:28.541 17:05:20 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@69 -- # iscsicleanup 00:14:28.541 17:05:20 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:14:28.541 17:05:20 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:14:28.541 Logging out of session [sid: 63, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:14:28.541 Logout of [sid: 63, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
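Both fio passes above are generated by the fio-wrapper script, which writes the job files dumped in the log. A standalone equivalent of the second pass (128 KiB blocks, queue depth 32), assuming /dev/sda is still the logged-in iSCSI disk, would be:

    cat > iscsi_rbd.fio <<'EOF'
    [global]
    rw=randrw
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=131072
    iodepth=32
    do_verify=1
    verify=crc32c-intel
    verify_backlog=512

    [job0]
    filename=/dev/sda
    EOF
    fio iscsi_rbd.fio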
00:14:28.541 17:05:20 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:14:28.541 17:05:20 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@985 -- # rm -rf 00:14:28.541 17:05:20 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@70 -- # rpc_cmd bdev_rbd_delete Ceph0 00:14:28.541 17:05:20 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.541 17:05:20 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:28.541 [2024-07-25 17:05:20.991844] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Ceph0) received event(SPDK_BDEV_EVENT_REMOVE) 00:14:28.799 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.799 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@71 -- # rpc_cmd bdev_rbd_unregister_cluster iscsi_rbd_cluster 00:14:28.799 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.799 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:28.799 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.799 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@72 -- # killprocess 73013 00:14:28.799 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@950 -- # '[' -z 73013 ']' 00:14:28.799 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@954 -- # kill -0 73013 00:14:28.799 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@955 -- # uname 00:14:28.799 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:28.799 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73013 00:14:28.799 killing process with pid 73013 00:14:28.800 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:28.800 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:28.800 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73013' 00:14:28.800 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@969 -- # kill 73013 00:14:28.800 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@974 -- # wait 73013 00:14:29.058 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@73 -- # rbd_cleanup 00:14:29.058 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1033 -- # hash ceph 00:14:29.058 17:05:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1034 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:14:29.058 + base_dir=/var/tmp/ceph 00:14:29.058 + image=/var/tmp/ceph/ceph_raw.img 00:14:29.058 + dev=/dev/loop200 00:14:29.058 + pkill -9 ceph 00:14:29.058 + sleep 3 00:14:32.339 + umount /dev/loop200p2 00:14:32.339 umount: /dev/loop200p2: not mounted. 
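The cleanup path mirrors the setup: stop.sh kills the Ceph daemons and unwinds the loop device, then rbd_cleanup removes the raw image. Condensed from the trace:

    pkill -9 ceph
    sleep 3
    umount /dev/loop200p2 || true   # may already be unmounted, as logged
    losetup -d /dev/loop200 || true
    rm -rf /var/tmp/ceph
    rm -f /var/tmp/ceph_raw.img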
00:14:32.339 + losetup -d /dev/loop200 00:14:32.339 + rm -rf /var/tmp/ceph 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1035 -- # rm -f /var/tmp/ceph_raw.img 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@75 -- # iscsitestfini 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:14:32.339 00:14:32.339 real 0m29.538s 00:14:32.339 user 0m25.711s 00:14:32.339 sys 0m2.086s 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:14:32.339 ************************************ 00:14:32.339 END TEST iscsi_tgt_rbd 00:14:32.339 ************************************ 00:14:32.339 17:05:24 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@57 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:14:32.339 17:05:24 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@59 -- # '[' 1 -eq 1 ']' 00:14:32.339 17:05:24 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@60 -- # run_test iscsi_tgt_initiator /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator/initiator.sh 00:14:32.339 17:05:24 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:32.339 17:05:24 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.339 17:05:24 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:14:32.339 ************************************ 00:14:32.339 START TEST iscsi_tgt_initiator 00:14:32.339 ************************************ 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator/initiator.sh 00:14:32.339 * Looking for test storage... 00:14:32.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:14:32.339 17:05:24 
iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@11 -- # iscsitestinit 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@13 -- # MALLOC_BDEV_SIZE=64 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@16 -- # timing_enter start_iscsi_tgt 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@19 -- # pid=73311 00:14:32.339 iSCSI target launched. pid: 73311 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@20 -- # echo 'iSCSI target launched. pid: 73311' 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@18 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@21 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@22 -- # waitforlisten 73311 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@831 -- # '[' -z 73311 ']' 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:32.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:32.339 17:05:24 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:14:32.339 [2024-07-25 17:05:24.763439] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
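Unlike the RBD test, the initiator tests back the target with a plain 64 MiB malloc bdev. As the following trace shows, the per-test target setup reduces to (rpc.py assumed for rpc_cmd):

    ./scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
    ./scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
    ./scripts/rpc.py bdev_malloc_create 64 512              # returns Malloc0
    ./scripts/rpc.py iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d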
00:14:32.339 [2024-07-25 17:05:24.763517] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73311 ] 00:14:32.598 [2024-07-25 17:05:25.014985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.856 [2024-07-25 17:05:25.092654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@864 -- # return 0 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@23 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@24 -- # rpc_cmd framework_start_init 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.424 iscsi_tgt is listening. Running tests... 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@25 -- # echo 'iscsi_tgt is listening. Running tests...' 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@27 -- # timing_exit start_iscsi_tgt 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@29 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@30 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@31 -- # rpc_cmd bdev_malloc_create 64 512 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:14:33.424 Malloc0 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@36 -- # rpc_cmd iscsi_create_target_node disk1 
disk1_alias Malloc0:0 1:2 256 -d 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.424 17:05:25 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@37 -- # sleep 1 00:14:34.361 17:05:26 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@38 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:14:34.361 17:05:26 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 5 -s 512 00:14:34.361 17:05:26 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@40 -- # initiator_json_config 00:14:34.361 17:05:26 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:14:34.621 [2024-07-25 17:05:26.887240] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:34.621 [2024-07-25 17:05:26.887318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73355 ] 00:14:34.880 [2024-07-25 17:05:27.132877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.880 [2024-07-25 17:05:27.210862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.880 Running I/O for 5 seconds... 00:14:40.155 00:14:40.155 Latency(us) 00:14:40.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.155 Job: iSCSI0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:40.155 Verification LBA range: start 0x0 length 0x4000 00:14:40.155 iSCSI0 : 5.00 22244.72 86.89 0.00 0.00 5731.47 1177.81 4316.43 00:14:40.155 =================================================================================================================== 00:14:40.155 Total : 22244.72 86.89 0.00 0.00 5731.47 1177.81 4316.43 00:14:40.155 17:05:32 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w unmap -t 5 -s 512 00:14:40.155 17:05:32 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@41 -- # initiator_json_config 00:14:40.155 17:05:32 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:14:40.155 [2024-07-25 17:05:32.509597] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:40.155 [2024-07-25 17:05:32.509662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73433 ] 00:14:40.413 [2024-07-25 17:05:32.752950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.413 [2024-07-25 17:05:32.829942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.413 Running I/O for 5 seconds... 
00:14:45.682 00:14:45.682 Latency(us) 00:14:45.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.682 Job: iSCSI0 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:14:45.682 iSCSI0 : 5.00 51702.68 201.96 0.00 0.00 2472.79 861.97 2645.13 00:14:45.682 =================================================================================================================== 00:14:45.682 Total : 51702.68 201.96 0.00 0.00 2472.79 861.97 2645.13 00:14:45.682 17:05:38 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w flush -t 5 -s 512 00:14:45.682 17:05:38 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@42 -- # initiator_json_config 00:14:45.682 17:05:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:14:45.682 [2024-07-25 17:05:38.126852] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:45.682 [2024-07-25 17:05:38.126925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73495 ] 00:14:45.940 [2024-07-25 17:05:38.371885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.244 [2024-07-25 17:05:38.449682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.244 Running I/O for 5 seconds... 00:14:51.515 00:14:51.515 Latency(us) 00:14:51.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.515 Job: iSCSI0 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:14:51.515 iSCSI0 : 5.00 75224.85 293.85 0.00 0.00 1699.51 585.61 2934.64 00:14:51.515 =================================================================================================================== 00:14:51.515 Total : 75224.85 293.85 0.00 0.00 1699.51 585.61 2934.64 00:14:51.515 17:05:43 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w reset -t 10 -s 512 00:14:51.515 17:05:43 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@43 -- # initiator_json_config 00:14:51.515 17:05:43 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:14:51.515 [2024-07-25 17:05:43.745164] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:51.515 [2024-07-25 17:05:43.745230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73560 ] 00:14:51.774 [2024-07-25 17:05:43.987559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.774 [2024-07-25 17:05:44.063536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.774 Running I/O for 10 seconds... 
00:15:01.780 00:15:01.780 Latency(us) 00:15:01.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.780 Job: iSCSI0 (Core Mask 0x1, workload: reset, depth: 128, IO size: 4096) 00:15:01.780 Verification LBA range: start 0x0 length 0x4000 00:15:01.780 iSCSI0 : 10.00 22979.12 89.76 0.00 0.00 5549.07 1039.63 3816.35 00:15:01.780 =================================================================================================================== 00:15:01.780 Total : 22979.12 89.76 0.00 0.00 5549.07 1039.63 3816.35 00:15:02.039 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:02.039 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@47 -- # killprocess 73311 00:15:02.039 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@950 -- # '[' -z 73311 ']' 00:15:02.039 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@954 -- # kill -0 73311 00:15:02.039 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@955 -- # uname 00:15:02.039 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.039 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73311 00:15:02.039 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:02.039 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:02.039 killing process with pid 73311 00:15:02.039 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73311' 00:15:02.039 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@969 -- # kill 73311 00:15:02.039 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@974 -- # wait 73311 00:15:02.298 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@49 -- # iscsitestfini 00:15:02.298 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:15:02.298 00:15:02.298 real 0m30.111s 00:15:02.298 user 0m41.968s 00:15:02.298 sys 0m11.521s 00:15:02.298 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:02.298 17:05:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:15:02.298 ************************************ 00:15:02.298 END TEST iscsi_tgt_initiator 00:15:02.298 ************************************ 00:15:02.298 17:05:54 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@61 -- # run_test iscsi_tgt_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait/bdev_io_wait.sh 00:15:02.298 17:05:54 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:02.298 17:05:54 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.298 17:05:54 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:15:02.298 ************************************ 00:15:02.298 START TEST iscsi_tgt_bdev_io_wait 00:15:02.298 ************************************ 00:15:02.298 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait/bdev_io_wait.sh 00:15:02.557 * Looking for test storage... 
00:15:02.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait 00:15:02.557 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:15:02.557 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:15:02.557 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:15:02.557 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:15:02.557 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:15:02.557 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:15:02.557 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:15:02.557 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:15:02.557 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:15:02.557 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:15:02.557 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@11 -- # iscsitestinit 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@16 -- # timing_enter start_iscsi_tgt 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@19 -- # pid=73726 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@18 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@20 -- # 
echo 'iSCSI target launched. pid: 73726' 00:15:02.558 iSCSI target launched. pid: 73726 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@21 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@22 -- # waitforlisten 73726 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 73726 ']' 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.558 17:05:54 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:02.558 [2024-07-25 17:05:54.943000] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:02.558 [2024-07-25 17:05:54.943070] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73726 ] 00:15:02.817 [2024-07-25 17:05:55.177999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.817 [2024-07-25 17:05:55.251496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.385 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.385 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:15:03.385 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@23 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:15:03.385 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.385 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.385 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.385 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@25 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:03.385 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.385 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.385 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.385 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@26 -- # rpc_cmd framework_start_init 00:15:03.385 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.385 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.645 iscsi_tgt is listening. Running tests... 
00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@27 -- # echo 'iscsi_tgt is listening. Running tests...' 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@29 -- # timing_exit start_iscsi_tgt 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@31 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@32 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@33 -- # rpc_cmd bdev_malloc_create 64 512 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.645 Malloc0 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@38 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.645 17:05:55 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@39 -- # sleep 1 00:15:04.582 17:05:56 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@40 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:15:04.582 17:05:56 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w write -t 1 00:15:04.582 17:05:56 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@42 -- # initiator_json_config 00:15:04.582 17:05:56 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:15:04.582 [2024-07-25 17:05:57.045037] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
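The bdevperf invocations traced here read their bdev configuration from /dev/fd/62, which the harness fills with the output of initiator_json_config piped through jq. A rough sketch of what that initiator-side config looks like, assuming the standard bdev_iscsi_create method shape (the URL and IQNs below are illustrative, not quoted from this run):

initiator_json_config() {
  # Emit a bdev subsystem config that attaches to the target as an iSCSI
  # initiator; "iSCSI0" matches the job name in the latency tables below.
  cat <<JSON
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_iscsi_create",
      "params": {
        "name": "iSCSI0",
        "url": "iscsi://10.0.0.1:3260/iqn.2016-06.io.spdk:disk1/0",
        "initiator_iqn": "iqn.2016-06.io.spdk:initiator"
      }
    }]
  }]
}
JSON
}

The four passes that follow differ only in the -w workload flag (write, read, flush, unmap) at a fixed queue depth of 128 and 4 KiB I/O size.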
00:15:04.582 [2024-07-25 17:05:57.045117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73770 ]
00:15:04.841 [2024-07-25 17:05:57.184924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:04.841 [2024-07-25 17:05:57.261675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:05.100 Running I/O for 1 seconds...
00:15:06.037
00:15:06.037 Latency(us)
00:15:06.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:06.037 Job: iSCSI0 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:15:06.037 iSCSI0 : 1.00 39704.06 155.09 0.00 0.00 3217.25 1112.01 3921.63
00:15:06.037 ===================================================================================================================
00:15:06.037 Total : 39704.06 155.09 0.00 0.00 3217.25 1112.01 3921.63
00:15:06.295 17:05:58 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w read -t 1
00:15:06.295 17:05:58 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@43 -- # initiator_json_config
00:15:06.295 17:05:58 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq .
00:15:06.295 [2024-07-25 17:05:58.598297] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:15:06.295 [2024-07-25 17:05:58.598553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73785 ]
00:15:06.295 [2024-07-25 17:05:58.737210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:06.554 [2024-07-25 17:05:58.830277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:06.554 Running I/O for 1 seconds...
00:15:07.497
00:15:07.497 Latency(us)
00:15:07.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:07.497 Job: iSCSI0 (Core Mask 0x1, workload: read, depth: 128, IO size: 4096)
00:15:07.497 iSCSI0 : 1.00 48425.06 189.16 0.00 0.00 2637.61 694.18 3092.56
00:15:07.497 ===================================================================================================================
00:15:07.497 Total : 48425.06 189.16 0.00 0.00 2637.61 694.18 3092.56
00:15:07.756 17:06:00 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@44 -- # initiator_json_config
00:15:07.756 17:06:00 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w flush -t 1
00:15:07.756 17:06:00 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq .
00:15:07.756 [2024-07-25 17:06:00.172386] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
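A note on reading these bdevperf tables: MiB/s is just IOPS times the 4 KiB I/O size, and the last three columns are per-I/O latency in microseconds. For the write pass: 39704.06 x 4096 / 2^20 = 155.09 MiB/s; for the read pass: 48425.06 x 4096 / 2^20 = 189.16 MiB/s, matching the second column in both cases.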
00:15:07.756 [2024-07-25 17:06:00.172604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73812 ]
00:15:08.015 [2024-07-25 17:06:00.312026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:08.015 [2024-07-25 17:06:00.403173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:08.275 Running I/O for 1 seconds...
00:15:09.211
00:15:09.212 Latency(us)
00:15:09.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:09.212 Job: iSCSI0 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096)
00:15:09.212 iSCSI0 : 1.00 59797.98 233.59 0.00 0.00 2136.60 562.58 2553.01
00:15:09.212 ===================================================================================================================
00:15:09.212 Total : 59797.98 233.59 0.00 0.00 2136.60 562.58 2553.01
00:15:09.470 17:06:01 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@45 -- # initiator_json_config
00:15:09.470 17:06:01 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq .
00:15:09.470 17:06:01 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w unmap -t 1
00:15:09.470 [2024-07-25 17:06:01.744348] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:15:09.470 [2024-07-25 17:06:01.744413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73827 ]
00:15:09.470 [2024-07-25 17:06:01.883446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:09.729 [2024-07-25 17:06:01.967225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:09.729 Running I/O for 1 seconds...
00:15:10.665
00:15:10.665 Latency(us)
00:15:10.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:10.665 Job: iSCSI0 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096)
00:15:10.665 iSCSI0 : 1.00 43773.66 170.99 0.00 0.00 2918.42 861.97 3658.44
00:15:10.665 ===================================================================================================================
00:15:10.665 Total : 43773.66 170.99 0.00 0.00 2918.42 861.97 3658.44
00:15:10.925 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@47 -- # trap - SIGINT SIGTERM EXIT
00:15:10.925 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@49 -- # killprocess 73726
00:15:10.925 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 73726 ']'
00:15:10.925 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 73726
00:15:10.925 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@955 -- # uname
00:15:10.925 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:10.925 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73726
00:15:10.925 killing process with pid 73726 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:15:10.925 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:15:10.925 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73726'
00:15:10.925 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 73726
00:15:10.925 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 73726
00:15:11.183 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@51 -- # iscsitestfini
00:15:11.183 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']'
00:15:11.183
00:15:11.183 real 0m8.860s
00:15:11.183 user 0m11.566s
00:15:11.183 sys 0m2.973s
00:15:11.183 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:11.183 ************************************
00:15:11.183 END TEST iscsi_tgt_bdev_io_wait
00:15:11.183 ************************************
00:15:11.183 17:06:03 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:15:11.442 17:06:03 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@62 -- # run_test iscsi_tgt_resize /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize/resize.sh
00:15:11.442 17:06:03 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:15:11.442 17:06:03 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:11.442 17:06:03 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x
00:15:11.442 ************************************
00:15:11.442 START TEST iscsi_tgt_resize
00:15:11.442 ************************************
00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize/resize.sh
00:15:11.442 * Looking for test storage...
00:15:11.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@12 -- # iscsitestinit 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@14 -- # BDEV_SIZE=64 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@15 -- # BDEV_NEW_SIZE=128 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@16 -- # BLOCK_SIZE=512 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@17 -- # RESIZE_SOCK=/var/tmp/spdk-resize.sock 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@19 -- # timing_enter start_iscsi_tgt 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@22 -- # rm -f /var/tmp/spdk-resize.sock 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:15:11.442 iSCSI target launched. 
pid: 73899 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@25 -- # pid=73899 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@26 -- # echo 'iSCSI target launched. pid: 73899' 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@27 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@28 -- # waitforlisten 73899 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@831 -- # '[' -z 73899 ']' 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:11.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:11.442 17:06:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:15:11.442 [2024-07-25 17:06:03.879333] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:11.442 [2024-07-25 17:06:03.879435] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73899 ] 00:15:11.700 [2024-07-25 17:06:04.139924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.958 [2024-07-25 17:06:04.217715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@864 -- # return 0 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@29 -- # rpc_cmd framework_start_init 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.526 iscsi_tgt is listening. Running tests... 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@30 -- # echo 'iscsi_tgt is listening. Running tests...' 
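Both target launches in this file use the same pattern: the app starts with --wait-for-rpc, which holds subsystem initialization until an explicit RPC so that options that must precede init (iscsi_set_options and bdev_set_options in the bdev_io_wait test above) can be applied first. A condensed sketch, with rpc.py standing in for the rpc_cmd wrapper used by the harness:

ip netns exec spdk_iscsi_ns iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc &
pid=$!
waitforlisten "$pid"                  # returns once /var/tmp/spdk.sock accepts RPCs
rpc.py iscsi_set_options -o 30 -a 4   # only valid before framework_start_init
rpc.py framework_start_init           # completes subsystem initialization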
00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@32 -- # timing_exit start_iscsi_tgt 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@34 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@35 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@36 -- # rpc_cmd bdev_null_create Null0 64 512 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:15:12.526 Null0 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@41 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Null0:0 1:2 256 -d 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.526 17:06:04 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@42 -- # sleep 1 00:15:13.464 17:06:05 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@43 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:15:13.464 17:06:05 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@47 -- # bdevperf_pid=73942 00:15:13.464 17:06:05 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@48 -- # waitforlisten 73942 /var/tmp/spdk-resize.sock 00:15:13.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-resize.sock... 00:15:13.464 17:06:05 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@831 -- # '[' -z 73942 ']' 00:15:13.464 17:06:05 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-resize.sock 00:15:13.464 17:06:05 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.464 17:06:05 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-resize.sock...' 
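The core of the resize check, condensed into plain rpc.py calls (paraphrasing resize.sh; the names, sockets, and sizes are the ones in this run):

rpc.py bdev_null_create Null0 64 512       # 64 MiB null bdev with 512 B blocks
# bdevperf is started with -z, so it idles until perform_tests is sent on
# /var/tmp/spdk-resize.sock; in the meantime the target-side LUN is grown:
rpc.py bdev_null_resize Null0 128          # grow to 128 MiB
rpc.py -s /var/tmp/spdk-resize.sock bdev_get_bdevs | jq '.[].num_blocks'

The arithmetic behind the two assertions below: 64 MiB / 512 B = 131072 blocks before the resize, and 128 MiB / 512 B = 262144 blocks once the new size has propagated to the initiator, which is exactly the pair of num_block values the script checks.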
00:15:13.464 17:06:05 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.464 17:06:05 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:15:13.464 17:06:05 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@46 -- # initiator_json_config 00:15:13.464 17:06:05 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-resize.sock --json /dev/fd/63 -q 16 -o 4096 -w read -t 5 -R -s 128 -z 00:15:13.464 17:06:05 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@139 -- # jq . 00:15:13.723 [2024-07-25 17:06:05.989519] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:13.723 [2024-07-25 17:06:05.989589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 128 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73942 ] 00:15:13.723 [2024-07-25 17:06:06.154201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.983 [2024-07-25 17:06:06.230775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@864 -- # return 0 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@50 -- # rpc_cmd bdev_null_resize Null0 128 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:15:14.551 [2024-07-25 17:06:06.810176] lun.c: 402:bdev_event_cb: *NOTICE*: bdev name (Null0) received event(SPDK_BDEV_EVENT_RESIZE) 00:15:14.551 true 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # rpc_cmd -s /var/tmp/spdk-resize.sock bdev_get_bdevs 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # jq '.[].num_blocks' 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # num_block=131072 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@54 -- # total_size=64 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@55 -- # '[' 64 '!=' 64 ']' 00:15:14.551 17:06:06 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@59 -- # sleep 2 00:15:16.457 17:06:08 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@61 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-resize.sock perform_tests 00:15:16.716 Running I/O for 5 seconds... 
00:15:21.986
00:15:21.986 Latency(us)
00:15:21.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:21.986 Job: iSCSI0 (Core Mask 0x1, workload: read, depth: 16, IO size: 4096)
00:15:21.986 iSCSI0 : 5.00 56652.42 221.30 0.00 0.00 280.09 172.72 644.83
00:15:21.986 ===================================================================================================================
00:15:21.986 Total : 56652.42 221.30 0.00 0.00 280.09 172.72 644.83
00:15:21.986 0
00:15:21.986 17:06:13 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # rpc_cmd -s /var/tmp/spdk-resize.sock bdev_get_bdevs
00:15:21.986 17:06:13 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # jq '.[].num_blocks'
00:15:21.986 17:06:13 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:21.987 17:06:13 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x
00:15:21.987 17:06:13 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # num_block=262144
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@65 -- # total_size=128
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@66 -- # '[' 128 '!=' 128 ']'
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@71 -- # trap - SIGINT SIGTERM EXIT
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@72 -- # killprocess 73942
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@950 -- # '[' -z 73942 ']'
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # kill -0 73942
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@955 -- # uname
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73942
00:15:21.987 killing process with pid 73942 Received shutdown signal, test time was about 5.000000 seconds
00:15:21.987
00:15:21.987 Latency(us)
00:15:21.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:21.987 ===================================================================================================================
00:15:21.987 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73942'
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@969 -- # kill 73942
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@974 -- # wait 73942
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@73 -- # killprocess 73899
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@950 -- # '[' -z 73899 ']'
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # kill -0 73899
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@955 -- # uname
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize --
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73899 00:15:21.987 killing process with pid 73899 00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73899' 00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@969 -- # kill 73899 00:15:21.987 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@974 -- # wait 73899 00:15:22.365 17:06:14 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@75 -- # iscsitestfini 00:15:22.365 17:06:14 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:15:22.365 00:15:22.365 real 0m10.930s 00:15:22.365 user 0m15.776s 00:15:22.365 sys 0m3.487s 00:15:22.365 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:22.365 ************************************ 00:15:22.365 END TEST iscsi_tgt_resize 00:15:22.365 ************************************ 00:15:22.365 17:06:14 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:15:22.365 17:06:14 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@65 -- # cleanup_veth_interfaces 00:15:22.365 17:06:14 iscsi_tgt -- iscsi_tgt/common.sh@95 -- # ip link set init_br nomaster 00:15:22.365 17:06:14 iscsi_tgt -- iscsi_tgt/common.sh@96 -- # ip link set tgt_br nomaster 00:15:22.365 17:06:14 iscsi_tgt -- iscsi_tgt/common.sh@97 -- # ip link set tgt_br2 nomaster 00:15:22.365 17:06:14 iscsi_tgt -- iscsi_tgt/common.sh@98 -- # ip link set init_br down 00:15:22.365 17:06:14 iscsi_tgt -- iscsi_tgt/common.sh@99 -- # ip link set tgt_br down 00:15:22.365 17:06:14 iscsi_tgt -- iscsi_tgt/common.sh@100 -- # ip link set tgt_br2 down 00:15:22.365 17:06:14 iscsi_tgt -- iscsi_tgt/common.sh@101 -- # ip link delete iscsi_br type bridge 00:15:22.365 17:06:14 iscsi_tgt -- iscsi_tgt/common.sh@102 -- # ip link delete spdk_init_int 00:15:22.365 17:06:14 iscsi_tgt -- iscsi_tgt/common.sh@103 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:15:22.656 17:06:14 iscsi_tgt -- iscsi_tgt/common.sh@104 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:15:22.656 17:06:14 iscsi_tgt -- iscsi_tgt/common.sh@105 -- # ip netns del spdk_iscsi_ns 00:15:22.656 17:06:14 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:15:22.656 ************************************ 00:15:22.656 END TEST iscsi_tgt 00:15:22.656 ************************************ 00:15:22.656 00:15:22.656 real 7m14.975s 00:15:22.656 user 13m8.129s 00:15:22.656 sys 1m55.740s 00:15:22.656 17:06:14 iscsi_tgt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:22.656 17:06:14 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:15:22.656 17:06:14 -- spdk/autotest.sh@268 -- # run_test spdkcli_iscsi /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:15:22.656 17:06:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:22.656 17:06:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:22.656 17:06:14 -- common/autotest_common.sh@10 -- # set +x 00:15:22.656 ************************************ 00:15:22.656 START TEST spdkcli_iscsi 00:15:22.656 ************************************ 00:15:22.656 17:06:14 spdkcli_iscsi -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:15:22.656 * Looking for test 
storage... 00:15:22.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:22.656 17:06:15 spdkcli_iscsi -- spdkcli/iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:15:22.656 17:06:15 spdkcli_iscsi -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:15:22.656 17:06:15 spdkcli_iscsi -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:15:22.656 17:06:15 spdkcli_iscsi -- spdkcli/iscsi.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:15:22.656 17:06:15 spdkcli_iscsi -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:15:22.656 17:06:15 spdkcli_iscsi -- spdkcli/iscsi.sh@12 -- # MATCH_FILE=spdkcli_iscsi.test 00:15:22.656 17:06:15 spdkcli_iscsi -- spdkcli/iscsi.sh@13 -- # SPDKCLI_BRANCH=/iscsi 00:15:22.656 17:06:15 spdkcli_iscsi -- spdkcli/iscsi.sh@15 -- # trap cleanup EXIT 00:15:22.656 17:06:15 spdkcli_iscsi -- spdkcli/iscsi.sh@17 -- # timing_enter run_iscsi_tgt 00:15:22.656 17:06:15 spdkcli_iscsi -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:22.656 17:06:15 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:15:22.656 17:06:15 spdkcli_iscsi -- spdkcli/iscsi.sh@21 -- # iscsi_tgt_pid=74151 00:15:22.656 17:06:15 spdkcli_iscsi -- spdkcli/iscsi.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x3 -p 0 --wait-for-rpc 00:15:22.656 17:06:15 spdkcli_iscsi -- spdkcli/iscsi.sh@22 -- # waitforlisten 74151 00:15:22.656 17:06:15 spdkcli_iscsi -- common/autotest_common.sh@831 -- # '[' -z 74151 ']' 00:15:22.656 17:06:15 spdkcli_iscsi -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.656 17:06:15 spdkcli_iscsi -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.656 17:06:15 spdkcli_iscsi -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.656 17:06:15 spdkcli_iscsi -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.656 17:06:15 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:15:22.914 [2024-07-25 17:06:15.141847] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:22.914 [2024-07-25 17:06:15.142139] [ DPDK EAL parameters: iscsi --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74151 ] 00:15:22.914 [2024-07-25 17:06:15.283323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:23.172 [2024-07-25 17:06:15.383264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.172 [2024-07-25 17:06:15.383267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.739 17:06:15 spdkcli_iscsi -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.739 17:06:15 spdkcli_iscsi -- common/autotest_common.sh@864 -- # return 0 00:15:23.739 17:06:15 spdkcli_iscsi -- spdkcli/iscsi.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:23.997 17:06:16 spdkcli_iscsi -- spdkcli/iscsi.sh@25 -- # timing_exit run_iscsi_tgt 00:15:23.997 17:06:16 spdkcli_iscsi -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:23.997 17:06:16 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:15:23.997 17:06:16 spdkcli_iscsi -- spdkcli/iscsi.sh@27 -- # timing_enter spdkcli_create_iscsi_config 00:15:23.997 17:06:16 spdkcli_iscsi -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:23.997 17:06:16 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:15:23.997 17:06:16 spdkcli_iscsi -- spdkcli/iscsi.sh@48 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc0'\'' '\''Malloc0'\'' True 00:15:23.997 '\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:15:23.997 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:15:23.997 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:15:23.998 '\''/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"'\'' '\''host=127.0.0.1, port=3261'\'' True 00:15:23.998 '\''/iscsi/portal_groups create 2 127.0.0.1:3262'\'' '\''host=127.0.0.1, port=3262'\'' True 00:15:23.998 '\''/iscsi/initiator_groups create 2 ANY 10.0.2.15/32'\'' '\''hostname=ANY, netmask=10.0.2.15/32'\'' True 00:15:23.998 '\''/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32'\'' '\''hostname=ANZ, netmask=10.0.2.15/32'\'' True 00:15:23.998 '\''/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32'\'' '\''hostname=ANW, netmask=10.0.2.16'\'' True 00:15:23.998 '\''/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1'\'' '\''Target0'\'' True 00:15:23.998 '\''/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1'\'' '\''Target1'\'' True 00:15:23.998 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' True 00:15:23.998 '\''/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2'\'' '\''Malloc3'\'' True 00:15:23.998 '\''/iscsi/auth_groups create 1 "user:test1 
secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"'\'' '\''user=test3'\'' True 00:15:23.998 '\''/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2'\'' '\''user=test2'\'' True 00:15:23.998 '\''/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"'\'' '\''user=test4'\'' True 00:15:23.998 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true'\'' '\''disable_chap: True'\'' True 00:15:23.998 '\''/iscsi/global_params set_auth g=1 d=true r=false'\'' '\''disable_chap: True'\'' True 00:15:23.998 '\''/iscsi ls'\'' '\''Malloc'\'' True 00:15:23.998 ' 00:15:32.157 Executing command: ['/bdevs/malloc create 32 512 Malloc0', 'Malloc0', True] 00:15:32.157 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:15:32.157 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:15:32.157 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:15:32.157 Executing command: ['/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"', 'host=127.0.0.1, port=3261', True] 00:15:32.157 Executing command: ['/iscsi/portal_groups create 2 127.0.0.1:3262', 'host=127.0.0.1, port=3262', True] 00:15:32.157 Executing command: ['/iscsi/initiator_groups create 2 ANY 10.0.2.15/32', 'hostname=ANY, netmask=10.0.2.15/32', True] 00:15:32.157 Executing command: ['/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32', 'hostname=ANZ, netmask=10.0.2.15/32', True] 00:15:32.157 Executing command: ['/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32', 'hostname=ANW, netmask=10.0.2.16', True] 00:15:32.157 Executing command: ['/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1', 'Target0', True] 00:15:32.157 Executing command: ['/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1', 'Target1', True] 00:15:32.157 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', True] 00:15:32.157 Executing command: ['/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2', 'Malloc3', True] 00:15:32.157 Executing command: ['/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"', 'user=test3', True] 00:15:32.157 Executing command: ['/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2', 'user=test2', True] 00:15:32.158 Executing command: ['/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"', 'user=test4', True] 00:15:32.158 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true', 'disable_chap: True', True] 00:15:32.158 Executing command: ['/iscsi/global_params set_auth g=1 d=true r=false', 'disable_chap: True', True] 00:15:32.158 Executing command: ['/iscsi ls', 'Malloc', True] 00:15:32.158 17:06:23 spdkcli_iscsi -- spdkcli/iscsi.sh@49 -- # timing_exit spdkcli_create_iscsi_config 00:15:32.158 17:06:23 spdkcli_iscsi -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:32.158 17:06:23 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:15:32.158 17:06:23 spdkcli_iscsi -- spdkcli/iscsi.sh@51 -- # timing_enter spdkcli_check_match 00:15:32.158 17:06:23 spdkcli_iscsi -- common/autotest_common.sh@724 -- # xtrace_disable 
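Each triple echoed above comes straight from the spdkcli_job.py arguments: a spdkcli command, a substring expected in its output, and a flag the job runner uses when validating the step. The same configuration can be driven by hand with one-shot spdkcli invocations, and check_match (run next) compares a fresh listing against a stored pattern file; the redirect into the .test file is inferred from the rm -f that follows the match run:

scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc0
scripts/spdkcli.py ll /iscsi > test/spdkcli/match_files/spdkcli_iscsi.test
test/app/match/match test/spdkcli/match_files/spdkcli_iscsi.test.match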
00:15:32.158 17:06:23 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:15:32.158 17:06:23 spdkcli_iscsi -- spdkcli/iscsi.sh@52 -- # check_match 00:15:32.158 17:06:23 spdkcli_iscsi -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /iscsi 00:15:32.158 17:06:24 spdkcli_iscsi -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test.match 00:15:32.158 17:06:24 spdkcli_iscsi -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test 00:15:32.158 17:06:24 spdkcli_iscsi -- spdkcli/iscsi.sh@53 -- # timing_exit spdkcli_check_match 00:15:32.158 17:06:24 spdkcli_iscsi -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:32.158 17:06:24 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:15:32.158 17:06:24 spdkcli_iscsi -- spdkcli/iscsi.sh@55 -- # timing_enter spdkcli_clear_iscsi_config 00:15:32.158 17:06:24 spdkcli_iscsi -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:32.158 17:06:24 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:15:32.158 17:06:24 spdkcli_iscsi -- spdkcli/iscsi.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/iscsi/auth_groups delete_secret 1 test2'\'' '\''user=test2'\'' 00:15:32.158 '\''/iscsi/auth_groups delete_secret_all 1'\'' '\''user=test1'\'' 00:15:32.158 '\''/iscsi/auth_groups delete 1'\'' '\''user=test1'\'' 00:15:32.158 '\''/iscsi/auth_groups delete_all'\'' '\''user=test4'\'' 00:15:32.158 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' 00:15:32.158 '\''/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1'\'' '\''Target1'\'' 00:15:32.158 '\''/iscsi/target_nodes delete_all'\'' '\''Target0'\'' 00:15:32.158 '\''/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32'\'' '\''ANW'\'' 00:15:32.158 '\''/iscsi/initiator_groups delete 3'\'' '\''ANZ'\'' 00:15:32.158 '\''/iscsi/initiator_groups delete_all'\'' '\''ANY'\'' 00:15:32.158 '\''/iscsi/portal_groups delete 1'\'' '\''127.0.0.1:3261'\'' 00:15:32.158 '\''/iscsi/portal_groups delete_all'\'' '\''127.0.0.1:3262'\'' 00:15:32.158 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:15:32.158 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:15:32.158 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:15:32.158 '\''/bdevs/malloc delete Malloc0'\'' '\''Malloc0'\'' 00:15:32.158 ' 00:15:38.757 Executing command: ['/iscsi/auth_groups delete_secret 1 test2', 'user=test2', False] 00:15:38.757 Executing command: ['/iscsi/auth_groups delete_secret_all 1', 'user=test1', False] 00:15:38.757 Executing command: ['/iscsi/auth_groups delete 1', 'user=test1', False] 00:15:38.757 Executing command: ['/iscsi/auth_groups delete_all', 'user=test4', False] 00:15:38.757 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', False] 00:15:38.757 Executing command: ['/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1', 'Target1', False] 00:15:38.757 Executing command: ['/iscsi/target_nodes delete_all', 'Target0', False] 00:15:38.757 Executing command: ['/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32', 'ANW', False] 00:15:38.757 Executing command: ['/iscsi/initiator_groups delete 3', 'ANZ', False] 00:15:38.757 Executing command: ['/iscsi/initiator_groups delete_all', 'ANY', False] 
00:15:38.757 Executing command: ['/iscsi/portal_groups delete 1', '127.0.0.1:3261', False] 00:15:38.757 Executing command: ['/iscsi/portal_groups delete_all', '127.0.0.1:3262', False] 00:15:38.757 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:15:38.757 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:15:38.757 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:15:38.757 Executing command: ['/bdevs/malloc delete Malloc0', 'Malloc0', False] 00:15:38.757 17:06:30 spdkcli_iscsi -- spdkcli/iscsi.sh@73 -- # timing_exit spdkcli_clear_iscsi_config 00:15:38.757 17:06:30 spdkcli_iscsi -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:38.757 17:06:30 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:15:38.757 17:06:30 spdkcli_iscsi -- spdkcli/iscsi.sh@75 -- # killprocess 74151 00:15:38.757 17:06:30 spdkcli_iscsi -- common/autotest_common.sh@950 -- # '[' -z 74151 ']' 00:15:38.757 17:06:30 spdkcli_iscsi -- common/autotest_common.sh@954 -- # kill -0 74151 00:15:38.757 17:06:30 spdkcli_iscsi -- common/autotest_common.sh@955 -- # uname 00:15:38.757 17:06:30 spdkcli_iscsi -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:38.757 17:06:30 spdkcli_iscsi -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74151 00:15:38.757 killing process with pid 74151 00:15:38.757 17:06:31 spdkcli_iscsi -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:38.757 17:06:31 spdkcli_iscsi -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:38.757 17:06:31 spdkcli_iscsi -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74151' 00:15:38.757 17:06:31 spdkcli_iscsi -- common/autotest_common.sh@969 -- # kill 74151 00:15:38.757 17:06:31 spdkcli_iscsi -- common/autotest_common.sh@974 -- # wait 74151 00:15:39.015 17:06:31 spdkcli_iscsi -- spdkcli/iscsi.sh@1 -- # cleanup 00:15:39.015 17:06:31 spdkcli_iscsi -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:15:39.015 17:06:31 spdkcli_iscsi -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:15:39.015 17:06:31 spdkcli_iscsi -- spdkcli/common.sh@16 -- # '[' -n 74151 ']' 00:15:39.015 17:06:31 spdkcli_iscsi -- spdkcli/common.sh@17 -- # killprocess 74151 00:15:39.015 17:06:31 spdkcli_iscsi -- common/autotest_common.sh@950 -- # '[' -z 74151 ']' 00:15:39.016 17:06:31 spdkcli_iscsi -- common/autotest_common.sh@954 -- # kill -0 74151 00:15:39.016 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74151) - No such process 00:15:39.016 Process with pid 74151 is not found 00:15:39.016 17:06:31 spdkcli_iscsi -- common/autotest_common.sh@977 -- # echo 'Process with pid 74151 is not found' 00:15:39.016 17:06:31 spdkcli_iscsi -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:15:39.016 17:06:31 spdkcli_iscsi -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_iscsi.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:15:39.016 ************************************ 00:15:39.016 END TEST spdkcli_iscsi 00:15:39.016 ************************************ 00:15:39.016 00:15:39.016 real 0m16.405s 00:15:39.016 user 0m35.066s 00:15:39.016 sys 0m1.216s 00:15:39.016 17:06:31 spdkcli_iscsi -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:39.016 17:06:31 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:15:39.016 17:06:31 -- spdk/autotest.sh@271 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:15:39.016 
17:06:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:39.016 17:06:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:39.016 17:06:31 -- common/autotest_common.sh@10 -- # set +x 00:15:39.016 ************************************ 00:15:39.016 START TEST spdkcli_raid 00:15:39.016 ************************************ 00:15:39.016 17:06:31 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:15:39.274 * Looking for test storage... 00:15:39.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:15:39.274 17:06:31 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:15:39.274 17:06:31 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:39.274 17:06:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=74454 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:15:39.274 17:06:31 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 74454 00:15:39.274 17:06:31 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 74454 ']' 00:15:39.274 17:06:31 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.275 17:06:31 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:39.275 17:06:31 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.275 17:06:31 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:39.275 17:06:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:39.275 [2024-07-25 17:06:31.607471] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
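Unlike the iscsi runs above, the spdkcli raid harness drives the generic spdk_tgt app and starts it without --wait-for-rpc, since no pre-init options are needed; the setup reduces to roughly:

build/bin/spdk_tgt -m 0x3 -p 0 &
waitforlisten $!   # wait for /var/tmp/spdk.sock before issuing spdkcli commands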
00:15:39.275 [2024-07-25 17:06:31.607715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74454 ] 00:15:39.533 [2024-07-25 17:06:31.747334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:39.533 [2024-07-25 17:06:31.843981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.533 [2024-07-25 17:06:31.843984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.099 17:06:32 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:40.099 17:06:32 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:15:40.099 17:06:32 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:15:40.099 17:06:32 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:40.099 17:06:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:40.099 17:06:32 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:15:40.099 17:06:32 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:40.099 17:06:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:40.099 17:06:32 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:15:40.099 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:15:40.099 ' 00:15:41.473 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:15:41.473 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:15:41.731 17:06:34 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:15:41.731 17:06:34 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:41.731 17:06:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.731 17:06:34 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:15:41.731 17:06:34 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:41.731 17:06:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.731 17:06:34 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:15:41.731 ' 00:15:42.665 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:15:42.924 17:06:35 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:15:42.924 17:06:35 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:42.924 17:06:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:42.924 17:06:35 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:15:42.924 17:06:35 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:42.924 17:06:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:42.924 17:06:35 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:15:42.924 17:06:35 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:15:43.490 17:06:35 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:15:43.490 
17:06:35 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:15:43.490 17:06:35 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:15:43.490 17:06:35 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:43.490 17:06:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:43.490 17:06:35 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:15:43.490 17:06:35 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:43.490 17:06:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:43.490 17:06:35 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:15:43.490 ' 00:15:44.425 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:15:44.684 17:06:36 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:15:44.684 17:06:36 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:44.684 17:06:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.684 17:06:36 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:15:44.684 17:06:36 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:44.684 17:06:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.684 17:06:36 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:15:44.684 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:15:44.684 ' 00:15:46.061 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:15:46.061 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:15:46.061 17:06:38 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:15:46.061 17:06:38 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:46.061 17:06:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:46.061 17:06:38 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 74454 00:15:46.061 17:06:38 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 74454 ']' 00:15:46.061 17:06:38 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 74454 00:15:46.061 17:06:38 spdkcli_raid -- common/autotest_common.sh@955 -- # uname 00:15:46.061 17:06:38 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:46.061 17:06:38 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74454 00:15:46.061 killing process with pid 74454 00:15:46.061 17:06:38 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:46.061 17:06:38 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:46.061 17:06:38 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74454' 00:15:46.061 17:06:38 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 74454 00:15:46.061 17:06:38 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 74454 00:15:46.320 17:06:38 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:15:46.320 17:06:38 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 74454 ']' 00:15:46.320 17:06:38 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 74454 00:15:46.320 17:06:38 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 74454 ']' 00:15:46.320 Process with pid 74454 is not found 00:15:46.320 
17:06:38 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 74454 00:15:46.320 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74454) - No such process 00:15:46.320 17:06:38 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 74454 is not found' 00:15:46.320 17:06:38 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:15:46.320 17:06:38 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:15:46.320 17:06:38 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:15:46.320 17:06:38 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:15:46.579 00:15:46.579 real 0m7.393s 00:15:46.579 user 0m15.707s 00:15:46.579 sys 0m1.004s 00:15:46.579 ************************************ 00:15:46.579 END TEST spdkcli_raid 00:15:46.579 ************************************ 00:15:46.579 17:06:38 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:46.579 17:06:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:46.579 17:06:38 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:15:46.579 17:06:38 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:15:46.579 17:06:38 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:15:46.579 17:06:38 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:15:46.579 17:06:38 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:46.579 17:06:38 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:15:46.579 17:06:38 -- spdk/autotest.sh@334 -- # '[' 1 -eq 1 ']' 00:15:46.579 17:06:38 -- spdk/autotest.sh@335 -- # run_test blockdev_rbd /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh rbd 00:15:46.579 17:06:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:46.579 17:06:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:46.579 17:06:38 -- common/autotest_common.sh@10 -- # set +x 00:15:46.579 ************************************ 00:15:46.579 START TEST blockdev_rbd 00:15:46.579 ************************************ 00:15:46.579 17:06:38 blockdev_rbd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh rbd 00:15:46.579 * Looking for test storage... 
00:15:46.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:46.579 17:06:38 blockdev_rbd -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:46.579 17:06:38 blockdev_rbd -- bdev/nbd_common.sh@6 -- # set -e 00:15:46.579 17:06:38 blockdev_rbd -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:46.579 17:06:38 blockdev_rbd -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:46.579 17:06:38 blockdev_rbd -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:46.579 17:06:38 blockdev_rbd -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:46.579 17:06:38 blockdev_rbd -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:46.579 17:06:38 blockdev_rbd -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:46.579 17:06:38 blockdev_rbd -- bdev/blockdev.sh@20 -- # : 00:15:46.579 17:06:38 blockdev_rbd -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:46.579 17:06:38 blockdev_rbd -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:46.579 17:06:38 blockdev_rbd -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:46.579 17:06:38 blockdev_rbd -- bdev/blockdev.sh@673 -- # uname -s 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@681 -- # test_type=rbd 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@683 -- # dek= 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@689 -- # [[ rbd == bdev ]] 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@689 -- # [[ rbd == crypto_* ]] 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74703 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@49 -- # waitforlisten 74703 00:15:46.579 17:06:39 blockdev_rbd -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:46.579 17:06:39 blockdev_rbd -- common/autotest_common.sh@831 -- # '[' -z 74703 ']' 00:15:46.579 17:06:39 blockdev_rbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.579 17:06:39 blockdev_rbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:46.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.579 17:06:39 blockdev_rbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.579 17:06:39 blockdev_rbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:46.579 17:06:39 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:15:46.838 [2024-07-25 17:06:39.077965] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
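The waitforlisten step above boils down to polling the target's UNIX-domain RPC socket until it answers. A rough sketch of that loop (illustrative retry count and interval; not the autotest_common.sh source):

    pid=$1                                    # the freshly started spdk_tgt
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || exit 1  # give up if the target died
        if ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; then
            break                             # socket is up and answering RPCs
        fi
        sleep 0.5
    done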
00:15:46.838 [2024-07-25 17:06:39.078301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74703 ] 00:15:46.838 [2024-07-25 17:06:39.218747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.096 [2024-07-25 17:06:39.317561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.663 17:06:39 blockdev_rbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:47.663 17:06:39 blockdev_rbd -- common/autotest_common.sh@864 -- # return 0 00:15:47.663 17:06:39 blockdev_rbd -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:47.663 17:06:39 blockdev_rbd -- bdev/blockdev.sh@719 -- # setup_rbd_conf 00:15:47.663 17:06:39 blockdev_rbd -- bdev/blockdev.sh@260 -- # timing_enter rbd_setup 00:15:47.663 17:06:39 blockdev_rbd -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:47.663 17:06:39 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:15:47.663 17:06:39 blockdev_rbd -- bdev/blockdev.sh@261 -- # rbd_setup 127.0.0.1 00:15:47.663 17:06:39 blockdev_rbd -- common/autotest_common.sh@1007 -- # '[' -z 127.0.0.1 ']' 00:15:47.663 17:06:39 blockdev_rbd -- common/autotest_common.sh@1011 -- # '[' -n '' ']' 00:15:47.663 17:06:39 blockdev_rbd -- common/autotest_common.sh@1020 -- # hash ceph 00:15:47.663 17:06:39 blockdev_rbd -- common/autotest_common.sh@1021 -- # export PG_NUM=128 00:15:47.663 17:06:39 blockdev_rbd -- common/autotest_common.sh@1021 -- # PG_NUM=128 00:15:47.663 17:06:39 blockdev_rbd -- common/autotest_common.sh@1022 -- # export RBD_POOL=rbd 00:15:47.663 17:06:39 blockdev_rbd -- common/autotest_common.sh@1022 -- # RBD_POOL=rbd 00:15:47.663 17:06:39 blockdev_rbd -- common/autotest_common.sh@1023 -- # export RBD_NAME=foo 00:15:47.663 17:06:39 blockdev_rbd -- common/autotest_common.sh@1023 -- # RBD_NAME=foo 00:15:47.663 17:06:39 blockdev_rbd -- common/autotest_common.sh@1024 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:15:47.663 + base_dir=/var/tmp/ceph 00:15:47.663 + image=/var/tmp/ceph/ceph_raw.img 00:15:47.663 + dev=/dev/loop200 00:15:47.663 + pkill -9 ceph 00:15:47.663 + sleep 3 00:15:50.972 + umount /dev/loop200p2 00:15:50.972 umount: /dev/loop200p2: no mount point specified. 00:15:50.972 + losetup -d /dev/loop200 00:15:50.972 losetup: /dev/loop200: detach failed: No such device or address 00:15:50.972 + rm -rf /var/tmp/ceph 00:15:50.972 17:06:42 blockdev_rbd -- common/autotest_common.sh@1025 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 127.0.0.1 00:15:50.972 + set -e 00:15:50.972 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:15:50.972 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:15:50.972 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:15:50.972 + base_dir=/var/tmp/ceph 00:15:50.972 + mon_ip=127.0.0.1 00:15:50.972 + mon_dir=/var/tmp/ceph/mon.a 00:15:50.972 + pid_dir=/var/tmp/ceph/pid 00:15:50.972 + ceph_conf=/var/tmp/ceph/ceph.conf 00:15:50.972 + mnt_dir=/var/tmp/ceph/mnt 00:15:50.972 + image=/var/tmp/ceph_raw.img 00:15:50.972 + dev=/dev/loop200 00:15:50.972 + modprobe loop 00:15:50.972 + umount /dev/loop200p2 00:15:50.972 umount: /dev/loop200p2: no mount point specified. 
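The start.sh provisioning that runs next carves a GPT-partitioned loop device out of a sparse file; condensed to the commands seen in this run (the script itself adds the retries and the "+ true" guards visible in the trace):

    fallocate -l 4G /var/tmp/ceph_raw.img            # sparse 4 GiB backing file
    mknod /dev/loop200 b 7 200                       # loop block node, major 7 minor 200
    losetup /dev/loop200 /var/tmp/ceph_raw.img
    parted -s /dev/loop200 mktable gpt
    parted -s /dev/loop200 mkpart primary 0% 2GiB    # becomes osd-device-0-journal
    parted -s /dev/loop200 mkpart primary 2GiB 100%  # becomes osd-device-0-data
    sgdisk -c 1:osd-device-0-journal /dev/loop200    # GPT partition labels
    sgdisk -c 2:osd-device-0-data /dev/loop200
    kpartx /dev/loop200                              # list the resulting partition mappings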
00:15:50.972 + true 00:15:50.972 + losetup -d /dev/loop200 00:15:50.972 losetup: /dev/loop200: detach failed: No such device or address 00:15:50.972 + true 00:15:50.972 + '[' -d /var/tmp/ceph ']' 00:15:50.972 + mkdir /var/tmp/ceph 00:15:50.972 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:15:50.972 + '[' '!' -e /var/tmp/ceph_raw.img ']' 00:15:50.972 + fallocate -l 4G /var/tmp/ceph_raw.img 00:15:50.972 + mknod /dev/loop200 b 7 200 00:15:50.972 mknod: /dev/loop200: File exists 00:15:50.972 + true 00:15:50.972 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:15:50.972 + PARTED='parted -s' 00:15:50.972 + SGDISK=sgdisk 00:15:50.972 + echo 'Partitioning /dev/loop200' 00:15:50.972 Partitioning /dev/loop200 00:15:50.972 + parted -s /dev/loop200 mktable gpt 00:15:50.972 + sleep 2 00:15:52.874 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:15:52.874 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:15:52.874 Setting name on /dev/loop200 00:15:52.874 + partno=0 00:15:52.874 + echo 'Setting name on /dev/loop200' 00:15:52.874 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:15:53.811 Warning: The kernel is still using the old partition table. 00:15:53.811 The new table will be used at the next reboot or after you 00:15:53.811 run partprobe(8) or kpartx(8) 00:15:53.811 The operation has completed successfully. 00:15:53.811 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:15:55.190 Warning: The kernel is still using the old partition table. 00:15:55.190 The new table will be used at the next reboot or after you 00:15:55.190 run partprobe(8) or kpartx(8) 00:15:55.190 The operation has completed successfully. 00:15:55.190 + kpartx /dev/loop200 00:15:55.190 loop200p1 : 0 4192256 /dev/loop200 2048 00:15:55.190 loop200p2 : 0 4192256 /dev/loop200 4194304 00:15:55.190 ++ ceph -v 00:15:55.190 ++ awk '{print $3}' 00:15:55.190 + ceph_version=17.2.7 00:15:55.190 + ceph_maj=17 00:15:55.190 + '[' 17 -gt 12 ']' 00:15:55.190 + update_config=true 00:15:55.190 + rm -f /var/log/ceph/ceph-mon.a.log 00:15:55.190 + set_min_mon_release='--set-min-mon-release 14' 00:15:55.190 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:15:55.190 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:15:55.190 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:15:55.190 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:15:55.190 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:15:55.190 = sectsz=512 attr=2, projid32bit=1 00:15:55.190 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:55.190 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:55.190 data = bsize=4096 blocks=524032, imaxpct=25 00:15:55.190 = sunit=0 swidth=0 blks 00:15:55.190 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:55.190 log =internal log bsize=4096 blocks=16384, version=2 00:15:55.190 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:55.190 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:55.190 Discarding blocks...Done. 00:15:55.190 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:15:55.190 + cat 00:15:55.190 + rm -rf '/var/tmp/ceph/mon.a/*' 00:15:55.190 + mkdir -p /var/tmp/ceph/mon.a 00:15:55.190 + mkdir -p /var/tmp/ceph/pid 00:15:55.190 + rm -f /etc/ceph/ceph.client.admin.keyring 00:15:55.190 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:15:55.190 creating /var/tmp/ceph/keyring 00:15:55.190 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:15:55.190 + monmaptool --create --clobber --add a 127.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:15:55.190 monmaptool: monmap file /var/tmp/ceph/monmap 00:15:55.190 monmaptool: generated fsid 2aa62054-2bfa-4d02-b5aa-5e97aa12a7d5 00:15:55.190 setting min_mon_release = octopus 00:15:55.190 epoch 0 00:15:55.190 fsid 2aa62054-2bfa-4d02-b5aa-5e97aa12a7d5 00:15:55.190 last_changed 2024-07-25T17:06:47.532164+0000 00:15:55.190 created 2024-07-25T17:06:47.532164+0000 00:15:55.190 min_mon_release 15 (octopus) 00:15:55.190 election_strategy: 1 00:15:55.190 0: v2:127.0.0.1:12046/0 mon.a 00:15:55.190 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:15:55.190 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:15:55.190 + '[' true = true ']' 00:15:55.190 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:15:55.190 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:15:55.190 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:15:55.190 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:15:55.190 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:15:55.190 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:15:55.190 ++ hostname 00:15:55.190 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:15:55.449 + true 00:15:55.449 + '[' true = true ']' 00:15:55.449 + ceph-conf --name mon.a --show-config-value log_file 00:15:55.449 /var/log/ceph/ceph-mon.a.log 00:15:55.449 ++ ceph -s 00:15:55.449 ++ grep id 00:15:55.449 ++ awk '{print $2}' 00:15:55.711 + fsid=2aa62054-2bfa-4d02-b5aa-5e97aa12a7d5 00:15:55.711 + sed -i 's/perf = true/perf = true\n\tfsid = 2aa62054-2bfa-4d02-b5aa-5e97aa12a7d5 \n/g' /var/tmp/ceph/ceph.conf 00:15:55.711 + (( ceph_maj < 18 )) 00:15:55.711 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:15:55.711 + cat /var/tmp/ceph/ceph.conf 00:15:55.711 [global] 00:15:55.711 debug_lockdep = 0/0 00:15:55.711 debug_context = 0/0 00:15:55.711 debug_crush = 0/0 00:15:55.711 debug_buffer = 0/0 00:15:55.711 debug_timer = 0/0 00:15:55.711 debug_filer = 0/0 00:15:55.711 debug_objecter = 0/0 00:15:55.711 debug_rados = 0/0 00:15:55.711 debug_rbd = 0/0 00:15:55.711 debug_ms = 0/0 00:15:55.711 debug_monc = 0/0 00:15:55.711 debug_tp = 0/0 00:15:55.711 debug_auth = 0/0 00:15:55.711 debug_finisher = 0/0 00:15:55.711 debug_heartbeatmap = 0/0 00:15:55.711 debug_perfcounter = 0/0 00:15:55.711 debug_asok = 0/0 00:15:55.711 debug_throttle = 0/0 00:15:55.711 debug_mon = 0/0 00:15:55.711 debug_paxos = 0/0 00:15:55.711 debug_rgw = 0/0 00:15:55.711 00:15:55.711 perf = true 00:15:55.711 osd objectstore = filestore 00:15:55.711 00:15:55.711 fsid = 2aa62054-2bfa-4d02-b5aa-5e97aa12a7d5 00:15:55.711 00:15:55.711 mutex_perf_counter = false 00:15:55.711 throttler_perf_counter = false 00:15:55.711 rbd cache = false 00:15:55.711 mon_allow_pool_delete = true 00:15:55.711 00:15:55.711 osd_pool_default_size = 1 00:15:55.711 
00:15:55.711 [mon] 00:15:55.711 mon_max_pool_pg_num=166496 00:15:55.711 mon_osd_max_split_count = 10000 00:15:55.711 mon_pg_warn_max_per_osd = 10000 00:15:55.711 00:15:55.711 [osd] 00:15:55.711 osd_op_threads = 64 00:15:55.711 filestore_queue_max_ops=5000 00:15:55.711 filestore_queue_committing_max_ops=5000 00:15:55.711 journal_max_write_entries=1000 00:15:55.711 journal_queue_max_ops=3000 00:15:55.711 objecter_inflight_ops=102400 00:15:55.711 filestore_wbthrottle_enable=false 00:15:55.711 filestore_queue_max_bytes=1048576000 00:15:55.711 filestore_queue_committing_max_bytes=1048576000 00:15:55.711 journal_max_write_bytes=1048576000 00:15:55.711 journal_queue_max_bytes=1048576000 00:15:55.711 ms_dispatch_throttle_bytes=1048576000 00:15:55.711 objecter_inflight_op_bytes=1048576000 00:15:55.711 filestore_max_sync_interval=10 00:15:55.711 osd_client_message_size_cap = 0 00:15:55.711 osd_client_message_cap = 0 00:15:55.711 osd_enable_op_tracker = false 00:15:55.711 filestore_fd_cache_size = 10240 00:15:55.711 filestore_fd_cache_shards = 64 00:15:55.711 filestore_op_threads = 16 00:15:55.711 osd_op_num_shards = 48 00:15:55.711 osd_op_num_threads_per_shard = 2 00:15:55.711 osd_pg_object_context_cache_count = 10240 00:15:55.711 filestore_odsync_write = True 00:15:55.711 journal_dynamic_throttle = True 00:15:55.711 00:15:55.711 [osd.0] 00:15:55.711 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:15:55.711 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:15:55.711 00:15:55.711 # add mon address 00:15:55.711 [mon.a] 00:15:55.711 mon addr = v2:127.0.0.1:12046 00:15:55.711 + i=0 00:15:55.711 + mkdir -p /var/tmp/ceph/mnt 00:15:55.711 ++ uuidgen 00:15:55.711 + uuid=6d20f6b2-ea64-47d5-bb76-2fa59965069f 00:15:55.711 + ceph -c /var/tmp/ceph/ceph.conf osd create 6d20f6b2-ea64-47d5-bb76-2fa59965069f 0 00:15:55.970 0 00:15:55.970 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid 6d20f6b2-ea64-47d5-bb76-2fa59965069f --check-needs-journal --no-mon-config 00:15:55.970 2024-07-25T17:06:48.332+0000 7f36c7e59400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:15:55.971 2024-07-25T17:06:48.333+0000 7f36c7e59400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:15:55.971 2024-07-25T17:06:48.409+0000 7f36c7e59400 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected 6d20f6b2-ea64-47d5-bb76-2fa59965069f, invalid (someone else's?) 
journal 00:15:56.230 2024-07-25T17:06:48.461+0000 7f36c7e59400 -1 journal do_read_entry(4096): bad header magic 00:15:56.230 2024-07-25T17:06:48.461+0000 7f36c7e59400 -1 journal do_read_entry(4096): bad header magic 00:15:56.230 ++ hostname 00:15:56.230 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:15:57.607 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:15:57.607 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:15:57.866 added key for osd.0 00:15:57.866 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:15:58.125 + class_dir=/lib64/rados-classes 00:15:58.125 + [[ -e /lib64/rados-classes ]] 00:15:58.125 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:15:58.384 + pkill -9 ceph-osd 00:15:58.384 + true 00:15:58.384 + sleep 2 00:16:00.287 + mkdir -p /var/tmp/ceph/pid 00:16:00.287 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:16:00.287 2024-07-25T17:06:52.704+0000 7fde4de76400 -1 Falling back to public interface 00:16:00.544 2024-07-25T17:06:52.771+0000 7fde4de76400 -1 journal do_read_entry(8192): bad header magic 00:16:00.544 2024-07-25T17:06:52.771+0000 7fde4de76400 -1 journal do_read_entry(8192): bad header magic 00:16:00.544 2024-07-25T17:06:52.787+0000 7fde4de76400 -1 osd.0 0 log_to_monitors true 00:16:01.476 17:06:53 blockdev_rbd -- common/autotest_common.sh@1027 -- # ceph osd pool create rbd 128 00:16:02.410 pool 'rbd' created 00:16:02.410 17:06:54 blockdev_rbd -- common/autotest_common.sh@1028 -- # rbd create foo --size 1000 00:16:07.681 17:06:59 blockdev_rbd -- bdev/blockdev.sh@262 -- # timing_exit rbd_setup 00:16:07.681 17:06:59 blockdev_rbd -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:07.681 17:06:59 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:16:07.681 17:06:59 blockdev_rbd -- bdev/blockdev.sh@264 -- # rpc_cmd bdev_rbd_create -b Ceph0 rbd foo 512 00:16:07.681 17:06:59 blockdev_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.681 17:06:59 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:16:07.681 [2024-07-25 17:07:00.031955] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:16:07.681 WARNING:bdev_rbd_create should be used with specifying -c to have a cluster name after bdev_rbd_register_cluster. 
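The WARNING above refers to the two-step flow in which a named Rados cluster connection is registered first and then referenced from bdev_rbd_create via -c. A hedged sketch; the cluster name and the exact bdev_rbd_register_cluster arguments are assumptions, only the bdev_rbd_create form is taken from this run:

    ./scripts/rpc.py bdev_rbd_register_cluster rbd_cluster               # name is illustrative
    ./scripts/rpc.py bdev_rbd_create -b Ceph0 -c rbd_cluster rbd foo 512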
00:16:07.681 Ceph0 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.681 17:07:00 blockdev_rbd -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.681 17:07:00 blockdev_rbd -- bdev/blockdev.sh@739 -- # cat 00:16:07.681 17:07:00 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.681 17:07:00 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.681 17:07:00 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.681 17:07:00 blockdev_rbd -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:07.681 17:07:00 blockdev_rbd -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:07.681 17:07:00 blockdev_rbd -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.681 17:07:00 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:16:07.940 17:07:00 blockdev_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.940 17:07:00 blockdev_rbd -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:07.940 17:07:00 blockdev_rbd -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "50936bb4-c72c-47b5-8998-70d2d8d2c849"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "50936bb4-c72c-47b5-8998-70d2d8d2c849",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:16:07.940 17:07:00 blockdev_rbd -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:07.940 17:07:00 blockdev_rbd -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:07.940 17:07:00 blockdev_rbd -- bdev/blockdev.sh@751 -- # hello_world_bdev=Ceph0 00:16:07.940 17:07:00 blockdev_rbd -- 
bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:07.940 17:07:00 blockdev_rbd -- bdev/blockdev.sh@753 -- # killprocess 74703 00:16:07.940 17:07:00 blockdev_rbd -- common/autotest_common.sh@950 -- # '[' -z 74703 ']' 00:16:07.940 17:07:00 blockdev_rbd -- common/autotest_common.sh@954 -- # kill -0 74703 00:16:07.940 17:07:00 blockdev_rbd -- common/autotest_common.sh@955 -- # uname 00:16:07.940 17:07:00 blockdev_rbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:07.940 17:07:00 blockdev_rbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74703 00:16:07.940 17:07:00 blockdev_rbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:07.940 17:07:00 blockdev_rbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:07.941 17:07:00 blockdev_rbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74703' 00:16:07.941 killing process with pid 74703 00:16:07.941 17:07:00 blockdev_rbd -- common/autotest_common.sh@969 -- # kill 74703 00:16:07.941 17:07:00 blockdev_rbd -- common/autotest_common.sh@974 -- # wait 74703 00:16:08.199 17:07:00 blockdev_rbd -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:08.199 17:07:00 blockdev_rbd -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Ceph0 '' 00:16:08.199 17:07:00 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:08.199 17:07:00 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:08.199 17:07:00 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:16:08.199 ************************************ 00:16:08.199 START TEST bdev_hello_world 00:16:08.199 ************************************ 00:16:08.199 17:07:00 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Ceph0 '' 00:16:08.458 [2024-07-25 17:07:00.698863] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:08.458 [2024-07-25 17:07:00.698940] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75588 ] 00:16:08.458 [2024-07-25 17:07:00.837860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.717 [2024-07-25 17:07:00.936378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.717 [2024-07-25 17:07:01.103291] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:16:08.717 [2024-07-25 17:07:01.116270] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:08.717 [2024-07-25 17:07:01.116348] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Ceph0 00:16:08.717 [2024-07-25 17:07:01.116382] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:08.717 [2024-07-25 17:07:01.119197] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:08.718 [2024-07-25 17:07:01.153996] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:08.718 [2024-07-25 17:07:01.154037] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:08.718 [2024-07-25 17:07:01.160215] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
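The killprocess helper traced at the top of this stretch is a liveness-check-then-kill idiom. Reduced to its core (a sketch, not the autotest_common.sh source; it assumes the target is a child of the calling shell so wait can reap it):

    pid=74703
    if kill -0 "$pid" 2>/dev/null; then                             # only if still alive
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && exit 1   # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                                 # reap it, keep its exit status
    fi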
00:16:08.718 00:16:08.718 [2024-07-25 17:07:01.160259] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:08.977 00:16:08.977 real 0m0.726s 00:16:08.977 user 0m0.444s 00:16:08.977 sys 0m0.156s 00:16:08.977 17:07:01 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:08.977 17:07:01 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:08.977 ************************************ 00:16:08.977 END TEST bdev_hello_world 00:16:08.977 ************************************ 00:16:08.977 17:07:01 blockdev_rbd -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:08.977 17:07:01 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:08.977 17:07:01 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:08.977 17:07:01 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:16:08.977 ************************************ 00:16:08.977 START TEST bdev_bounds 00:16:08.977 ************************************ 00:16:08.977 17:07:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:16:08.977 Process bdevio pid: 75632 00:16:08.977 17:07:01 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=75632 00:16:08.977 17:07:01 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:08.977 17:07:01 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:08.977 17:07:01 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 75632' 00:16:08.977 17:07:01 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 75632 00:16:08.977 17:07:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 75632 ']' 00:16:08.977 17:07:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.977 17:07:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.977 17:07:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.977 17:07:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.977 17:07:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:09.236 [2024-07-25 17:07:01.496697] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
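bdevio started with -w comes up idle and waits until the test runner triggers the CUnit suite over its RPC socket; the pair of commands from this run, isolated:

    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &   # start idle, wait for RPC
    ./test/bdev/bdevio/tests.py perform_tests                          # kick off the suite below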
00:16:09.236 [2024-07-25 17:07:01.496763] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75632 ] 00:16:09.236 [2024-07-25 17:07:01.623283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:09.495 [2024-07-25 17:07:01.718813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.495 [2024-07-25 17:07:01.719003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.495 [2024-07-25 17:07:01.719004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.495 [2024-07-25 17:07:01.881172] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:16:10.063 17:07:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:10.063 17:07:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:16:10.063 17:07:02 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:10.063 I/O targets: 00:16:10.063 Ceph0: 2048000 blocks of 512 bytes (1000 MiB) 00:16:10.063 00:16:10.063 00:16:10.063 CUnit - A unit testing framework for C - Version 2.1-3 00:16:10.063 http://cunit.sourceforge.net/ 00:16:10.063 00:16:10.063 00:16:10.063 Suite: bdevio tests on: Ceph0 00:16:10.063 Test: blockdev write read block ...passed 00:16:10.063 Test: blockdev write zeroes read block ...passed 00:16:10.063 Test: blockdev write zeroes read no split ...passed 00:16:10.064 Test: blockdev write zeroes read split ...passed 00:16:10.064 Test: blockdev write zeroes read split partial ...passed 00:16:10.064 Test: blockdev reset ...passed 00:16:10.064 Test: blockdev write read 8 blocks ...passed 00:16:10.064 Test: blockdev write read size > 128k ...passed 00:16:10.064 Test: blockdev write read invalid size ...passed 00:16:10.064 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:10.064 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:10.064 Test: blockdev write read max offset ...passed 00:16:10.064 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:10.064 Test: blockdev writev readv 8 blocks ...passed 00:16:10.064 Test: blockdev writev readv 30 x 1block ...passed 00:16:10.064 Test: blockdev writev readv block ...passed 00:16:10.064 Test: blockdev writev readv size > 128k ...passed 00:16:10.323 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:10.323 Test: blockdev comparev and writev ...passed 00:16:10.323 Test: blockdev nvme passthru rw ...passed 00:16:10.323 Test: blockdev nvme passthru vendor specific ...passed 00:16:10.323 Test: blockdev nvme admin passthru ...passed 00:16:10.323 Test: blockdev copy ...passed 00:16:10.323 00:16:10.323 Run Summary: Type Total Ran Passed Failed Inactive 00:16:10.323 suites 1 1 n/a 0 0 00:16:10.323 tests 23 23 23 0 0 00:16:10.323 asserts 130 130 130 0 n/a 00:16:10.323 00:16:10.323 Elapsed time = 0.365 seconds 00:16:10.323 0 00:16:10.323 17:07:02 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 75632 00:16:10.323 17:07:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 75632 ']' 00:16:10.323 17:07:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 75632 00:16:10.323 17:07:02 blockdev_rbd.bdev_bounds -- 
common/autotest_common.sh@955 -- # uname 00:16:10.323 17:07:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:10.323 17:07:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75632 00:16:10.323 17:07:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:10.323 17:07:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:10.323 killing process with pid 75632 00:16:10.323 17:07:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75632' 00:16:10.323 17:07:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@969 -- # kill 75632 00:16:10.323 17:07:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@974 -- # wait 75632 00:16:10.582 17:07:02 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:10.583 00:16:10.583 real 0m1.390s 00:16:10.583 user 0m3.364s 00:16:10.583 sys 0m0.268s 00:16:10.583 17:07:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:10.583 17:07:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:10.583 ************************************ 00:16:10.583 END TEST bdev_bounds 00:16:10.583 ************************************ 00:16:10.583 17:07:02 blockdev_rbd -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Ceph0 '' 00:16:10.583 17:07:02 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:10.583 17:07:02 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:10.583 17:07:02 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:16:10.583 ************************************ 00:16:10.583 START TEST bdev_nbd 00:16:10.583 ************************************ 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Ceph0 '' 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Ceph0') 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- 
bdev/blockdev.sh@314 -- # bdev_list=('Ceph0') 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=75699 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 75699 /var/tmp/spdk-nbd.sock 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 75699 ']' 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:10.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:10.583 17:07:02 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:10.583 [2024-07-25 17:07:02.967474] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:10.583 [2024-07-25 17:07:02.967552] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.842 [2024-07-25 17:07:03.108790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.842 [2024-07-25 17:07:03.207719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.101 [2024-07-25 17:07:03.379188] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:16:11.360 17:07:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:11.360 17:07:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:16:11.360 17:07:03 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Ceph0 00:16:11.360 17:07:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:11.360 17:07:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Ceph0') 00:16:11.360 17:07:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:11.360 17:07:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Ceph0 00:16:11.360 17:07:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:11.360 17:07:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Ceph0') 00:16:11.360 17:07:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:11.360 17:07:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:11.360 17:07:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:11.360 17:07:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:11.360 17:07:03 
blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:11.360 17:07:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Ceph0 00:16:11.619 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.620 1+0 records in 00:16:11.620 1+0 records out 00:16:11.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000902153 s, 4.5 MB/s 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:11.620 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:11.879 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:11.879 { 00:16:11.879 "nbd_device": "/dev/nbd0", 00:16:11.879 "bdev_name": "Ceph0" 00:16:11.879 } 00:16:11.879 ]' 00:16:11.879 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:11.879 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:11.879 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:11.879 { 00:16:11.879 "nbd_device": "/dev/nbd0", 00:16:11.879 "bdev_name": "Ceph0" 00:16:11.879 } 00:16:11.879 ]' 00:16:11.879 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:11.879 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:11.879 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:11.879 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:16:11.879 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:11.879 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:11.879 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:12.138 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:12.138 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:12.138 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:12.138 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.138 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.138 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:12.138 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:12.138 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.138 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:12.138 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:12.138 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Ceph0 /dev/nbd0 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Ceph0') 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Ceph0 /dev/nbd0 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Ceph0') 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:12.398 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Ceph0 /dev/nbd0 00:16:12.657 /dev/nbd0 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:12.657 1+0 records in 00:16:12.657 1+0 records out 00:16:12.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654015 s, 6.3 MB/s 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:12.657 17:07:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:12.917 { 00:16:12.917 "nbd_device": "/dev/nbd0", 00:16:12.917 "bdev_name": "Ceph0" 00:16:12.917 } 00:16:12.917 ]' 00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:12.917 { 00:16:12.917 "nbd_device": "/dev/nbd0", 00:16:12.917 "bdev_name": "Ceph0" 00:16:12.917 } 00:16:12.917 ]' 00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:12.917 
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:16:12.917 {
00:16:12.917 "nbd_device": "/dev/nbd0",
00:16:12.917 "bdev_name": "Ceph0"
00:16:12.917 }
00:16:12.917 ]'
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:16:12.917 256+0 records in
00:16:12.917 256+0 records out
00:16:12.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133718 s, 78.4 MB/s
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:16:12.917 17:07:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:16:14.375 256+0 records in
00:16:14.375 256+0 records out
00:16:14.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 1.52726 s, 687 kB/s
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
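nbd_dd_data_verify, traced above, is the core data-integrity check of this test: write random data through the NBD device with O_DIRECT, then compare the device contents byte-for-byte against the source file. A condensed bash sketch of that pattern (the device and temp-file names here are assumptions, not the harness's helper itself):

    tmp_file=/tmp/nbdrandtest
    # Stage 1 MiB of random data, then push it through the block device...
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    dd if="$tmp_file" of=/dev/nbd0 bs=4096 count=256 oflag=direct
    # ...and verify the first 1M of the device matches the source exactly.
    cmp -b -n 1M "$tmp_file" /dev/nbd0 && echo "data verified"
    rm -f "$tmp_file"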
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:14.375 17:07:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:16:14.634 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:14.635 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:14.635 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:14.635 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:14.635 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:14.635 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:14.635 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:16:14.635 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:16:14.635 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:16:14.635 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:14.635 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0')
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:16:14.894 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:16:15.153 malloc_lvol_verify
00:16:15.153 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:16:15.412 53cb5b6b-bab0-4c08-962e-84aef12c3dbe
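waitfornbd and waitfornbd_exit, both traced in this run, poll /proc/partitions for up to 20 iterations until the kernel registers (or drops) the device. A simplified bash sketch of that polling loop; the real helper additionally issues a direct-I/O dd read to confirm the device is usable, as the @885 entries earlier show:

    wait_for_nbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # Succeed as soon as the kernel lists the device.
            grep -q -w "$nbd_name" /proc/partitions && return 0
            sleep 0.1
        done
        return 1  # device never appeared
    }

    wait_for_nbd nbd0 || echo "nbd0 never appeared"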
00:16:15.412 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:16:15.412 4a777231-956f-44f9-9dd1-fa971f2de9c3
00:16:15.412 17:07:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:16:15.672 /dev/nbd0
00:16:15.672 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:16:15.672 mke2fs 1.46.5 (30-Dec-2021)
00:16:15.672 Discarding device blocks: 0/4096 done
00:16:15.672 Creating filesystem with 4096 1k blocks and 1024 inodes
00:16:15.672 
00:16:15.672 Allocating group tables: 0/1 done
00:16:15.672 Writing inode tables: 0/1 done
00:16:15.672 Creating journal (1024 blocks): done
00:16:15.672 Writing superblocks and filesystem accounting information: 0/1 done
00:16:15.672 
00:16:15.672 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:16:15.672 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:16:15.672 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:15.672 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:16:15.672 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:15.672 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:16:15.672 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:15.672 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 75699
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 75699 ']'
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 75699
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@955 -- # uname
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75699
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:15.932 killing process with pid 75699
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75699'
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@969 -- # kill 75699
00:16:15.932 17:07:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@974 -- # wait 75699
00:16:16.192 17:07:08 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:16:16.192 
00:16:16.192 real 0m5.559s
00:16:16.192 user 0m6.783s
00:16:16.192 sys 0m1.922s
00:16:16.192 17:07:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:16.192 17:07:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:16:16.192 ************************************
00:16:16.192 END TEST bdev_nbd
00:16:16.192 ************************************
00:16:16.192 17:07:08 blockdev_rbd -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:16:16.192 17:07:08 blockdev_rbd -- bdev/blockdev.sh@763 -- # '[' rbd = nvme ']'
00:16:16.192 17:07:08 blockdev_rbd -- bdev/blockdev.sh@763 -- # '[' rbd = gpt ']'
00:16:16.192 17:07:08 blockdev_rbd -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite ''
00:16:16.192 17:07:08 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:16:16.192 17:07:08 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:16.192 17:07:08 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:16:16.192 ************************************
00:16:16.192 START TEST bdev_fio
00:16:16.192 ************************************
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite ''
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:16:16.192 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context=
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']'
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']'
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1301 -- # cat
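fio_config_gen, being traced here, assembles /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio: the cat above pulls in a global template, and per-bdev [job_...] sections are appended later in the trace. A rough bash sketch of the kind of file that results; the exact global options live in the SPDK tree, so every option below is illustrative rather than the template itself:

    cat > /tmp/bdev.fio <<'EOF'
    [global]
    ioengine=spdk_bdev   ; served by the LD_PRELOADed fio plugin
    thread=1
    direct=1
    verify=crc32c        ; the 'verify' workload requested above
    [job_Ceph0]
    filename=Ceph0       ; a bdev name, not a block-device path
    EOF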
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']'
00:16:16.192 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1314 -- # cat
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']'
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Ceph0]'
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Ceph0
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']'
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:16:16.193 ************************************
00:16:16.193 START TEST bdev_fio_rw_verify
00:16:16.193 ************************************
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib=
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan
00:16:16.193 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:16:16.452 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=
00:16:16.452 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:16:16.452 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:16:16.452 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:16:16.452 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:16.452 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:16:16.452 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=
00:16:16.452 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:16:16.452 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:16.452 17:07:08 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:16:16.452 job_Ceph0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:16:16.452 fio-3.35
00:16:16.452 Starting 1 thread
00:16:28.657 
00:16:28.657 job_Ceph0: (groupid=0, jobs=1): err= 0: pid=75941: Thu Jul 25 17:07:19 2024
00:16:28.657 read: IOPS=399, BW=1596KiB/s (1634kB/s)(16.0MiB/10290msec)
00:16:28.657 slat (usec): min=2, max=311, avg=10.45, stdev=19.73
00:16:28.657 clat (usec): min=135, max=568681, avg=5519.36, stdev=43780.04
00:16:28.657 lat (usec): min=141, max=568685, avg=5529.80, stdev=43779.55
00:16:28.657 clat percentiles (usec):
00:16:28.657 | 50.000th=[ 449], 99.000th=[170918], 99.900th=[566232],
00:16:28.657 | 99.990th=[566232], 99.999th=[566232]
00:16:28.657 write: IOPS=498, BW=1993KiB/s (2041kB/s)(20.0MiB/10290msec); 0 zone resets
00:16:28.657 slat (usec): min=11, max=740, avg=17.40, stdev=22.55
00:16:28.657 clat (usec): min=1964, max=171816, avg=11590.32, stdev=22365.93
00:16:28.657 lat (usec): min=1982, max=171832, avg=11607.72, stdev=22366.89
00:16:28.657 clat percentiles (msec):
00:16:28.657 | 50.000th=[ 7], 99.000th=[ 124], 99.900th=[ 157], 99.990th=[ 171],
00:16:28.657 | 99.999th=[ 171]
00:16:28.657 bw ( KiB/s): min= 360, max= 4870, per=100.00%, avg=2412.29, stdev=1647.54, samples=17
00:16:28.657 iops : min= 90, max= 1217, avg=603.00, stdev=411.85, samples=17
00:16:28.657 lat (usec) : 250=3.05%, 500=22.67%, 750=9.70%, 1000=3.52%
00:16:28.657 lat (msec) : 2=3.64%, 4=3.12%, 10=48.39%, 20=2.05%, 50=0.41%
00:16:28.657 lat (msec) : 100=1.05%, 250=2.08%, 500=0.17%, 750=0.14%
00:16:28.657 cpu : usr=99.05%, sys=0.19%, ctx=792, majf=0, minf=51
00:16:28.657 IO depths : 1=0.1%, 2=0.1%, 4=15.0%, 8=84.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:28.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:28.657 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:28.657 issued rwts: total=4106,5127,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:28.657 latency : target=0, window=0, percentile=100.00%, depth=8
00:16:28.657 
00:16:28.657 Run status group 0 (all jobs):
00:16:28.657 READ: bw=1596KiB/s (1634kB/s), 1596KiB/s-1596KiB/s (1634kB/s-1634kB/s), io=16.0MiB (16.8MB), run=10290-10290msec
00:16:28.657 WRITE: bw=1993KiB/s (2041kB/s), 1993KiB/s-1993KiB/s (2041kB/s-2041kB/s), io=20.0MiB (21.0MB), run=10290-10290msec
00:16:28.657 
00:16:28.657 real 0m11.201s
00:16:28.657 user 0m11.357s
00:16:28.657 sys 0m0.657s
00:16:28.657 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:28.657 ************************************
00:16:28.657 END TEST bdev_fio_rw_verify
00:16:28.657 ************************************
00:16:28.657 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:16:28.657 17:07:19 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:16:28.657 17:07:19 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context=
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']'
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']'
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1301 -- # cat
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']'
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']'
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite
{' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Ceph0 ]] 00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "50936bb4-c72c-47b5-8998-70d2d8d2c849"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "50936bb4-c72c-47b5-8998-70d2d8d2c849",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Ceph0]' 00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Ceph0 00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.658 17:07:19 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:28.658 ************************************ 00:16:28.658 START TEST bdev_fio_trim 00:16:28.658 ************************************ 00:16:28.658 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:28.658 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_trim -- 
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib=
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan
00:16:28.658 17:07:19 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:16:28.658 17:07:20 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=
00:16:28.658 17:07:20 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:16:28.658 17:07:20 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:16:28.658 17:07:20 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:28.658 17:07:20 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:16:28.658 17:07:20 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:16:28.658 17:07:20 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=
00:16:28.658 17:07:20 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:16:28.658 17:07:20 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:28.658 17:07:20 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:16:28.658 job_Ceph0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:16:28.658 fio-3.35
00:16:28.658 Starting 1 thread
00:16:38.636 
00:16:38.636 job_Ceph0: (groupid=0, jobs=1): err= 0: pid=76130: Thu Jul 25 17:07:30 2024
00:16:38.636 write: IOPS=887, BW=3551KiB/s (3636kB/s)(34.7MiB/10005msec); 0 zone resets
00:16:38.636 slat (usec): min=3, max=739, avg=12.57, stdev=31.63
00:16:38.636 clat (usec): min=1991, max=31315, avg=8894.10, stdev=3515.58
00:16:38.636 lat (usec): min=1995, max=31320, avg=8906.68, stdev=3516.04
00:16:38.636 clat percentiles (usec):
00:16:38.636 | 50.000th=[ 8586], 99.000th=[18482], 99.900th=[30540], 99.990th=[31327],
00:16:38.636 | 99.999th=[31327]
00:16:38.636 bw ( KiB/s): min= 2589, max= 4184, per=99.33%, avg=3527.84, stdev=467.26, samples=19
00:16:38.636 iops : min= 647, max= 1046, avg=881.95, stdev=116.84, samples=19
00:16:38.636 trim: IOPS=887, BW=3551KiB/s (3636kB/s)(34.7MiB/10005msec); 0 zone resets
00:16:38.636 slat (usec): min=2, max=503, avg= 6.84, stdev=15.97
00:16:38.636 clat (usec): min=2, max=10377, avg=95.61, stdev=232.62
00:16:38.636 lat (usec): min=9, max=10464, avg=102.45, stdev=233.09
00:16:38.636 clat percentiles (usec):
00:16:38.636 | 50.000th=[ 68], 99.000th=[ 334], 99.900th=[ 930], 99.990th=[10421],
00:16:38.636 | 99.999th=[10421]
00:16:38.636 bw ( KiB/s): min= 2589, max= 4184, per=99.45%, avg=3531.21, stdev=466.41, samples=19
00:16:38.636 iops : min= 647, max= 1046, avg=882.79, stdev=116.63, samples=19
00:16:38.636 lat (usec) : 4=0.27%, 10=0.81%, 20=6.29%, 50=13.70%, 100=10.67%
00:16:38.636 lat (usec) : 250=16.45%, 500=1.55%, 750=0.17%, 1000=0.05%
00:16:38.636 lat (msec) : 2=0.01%, 4=2.53%, 10=30.71%, 20=16.39%, 50=0.39%
00:16:38.636 cpu : usr=98.54%, sys=0.27%, ctx=2050, majf=0, minf=26
00:16:38.636 IO depths : 1=0.1%, 2=0.1%, 4=5.8%, 8=94.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:38.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:38.636 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:38.636 issued rwts: total=0,8881,8881,0 short=0,0,0,0 dropped=0,0,0,0
00:16:38.636 latency : target=0, window=0, percentile=100.00%, depth=8
00:16:38.636 
00:16:38.636 Run status group 0 (all jobs):
00:16:38.636 WRITE: bw=3551KiB/s (3636kB/s), 3551KiB/s-3551KiB/s (3636kB/s-3636kB/s), io=34.7MiB (36.4MB), run=10005-10005msec
00:16:38.636 TRIM: bw=3551KiB/s (3636kB/s), 3551KiB/s-3551KiB/s (3636kB/s-3636kB/s), io=34.7MiB (36.4MB), run=10005-10005msec
00:16:38.636 
00:16:38.636 real 0m10.939s
00:16:38.636 user 0m11.096s
00:16:38.636 sys 0m0.723s
00:16:38.636 17:07:30 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:38.636 ************************************
00:16:38.636 END TEST bdev_fio_trim
00:16:38.636 ************************************
00:16:38.636 17:07:30 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:16:38.636 17:07:30 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f
00:16:38.636 17:07:30 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:38.636 /home/vagrant/spdk_repo/spdk
00:16:38.636 17:07:30 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@369 -- # popd
00:16:38.636 17:07:30 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:16:38.636 
00:16:38.636 real 0m22.475s
00:16:38.636 user 0m22.621s
00:16:38.636 sys 0m1.539s
00:16:38.636 17:07:30 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:38.636 17:07:30 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:16:38.636 ************************************
00:16:38.636 END TEST bdev_fio
00:16:38.636 ************************************
00:16:38.636 17:07:31 blockdev_rbd -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:16:38.636 17:07:31 blockdev_rbd -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:16:38.636 17:07:31 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:16:38.636 17:07:31 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:38.636 17:07:31 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:16:38.636 ************************************
00:16:38.636 START TEST bdev_verify
00:16:38.636 ************************************
00:16:38.636 17:07:31 blockdev_rbd.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:16:38.919 [2024-07-25 17:07:31.130300] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:16:38.919 [2024-07-25 17:07:31.130379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76270 ]
00:16:38.919 [2024-07-25 17:07:31.264280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:38.919 [2024-07-25 17:07:31.367070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:16:38.919 [2024-07-25 17:07:31.367073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:47.033 [2024-07-25 17:07:39.265372] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun
00:16:47.033 Running I/O for 5 seconds...
00:16:52.302 
00:16:52.302 Latency(us)
00:16:52.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:52.302 Job: Ceph0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:52.302 Verification LBA range: start 0x0 length 0x1f400
00:16:52.302 Ceph0 : 5.03 1685.49 6.58 0.00 0.00 75717.28 3711.07 1421683.77
00:16:52.302 Job: Ceph0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:52.302 Verification LBA range: start 0x1f400 length 0x1f400
00:16:52.302 Ceph0 : 5.03 1489.39 5.82 0.00 0.00 85715.88 5263.94 1199335.12
00:16:52.302 ===================================================================================================================
00:16:52.302 Total : 3174.88 12.40 0.00 0.00 80411.38 3711.07 1421683.77
00:16:52.302 
00:16:52.302 real 0m13.456s
00:16:52.302 user 0m19.749s
00:16:52.302 sys 0m0.706s
00:16:52.302 17:07:44 blockdev_rbd.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:52.302 17:07:44 blockdev_rbd.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:16:52.302 ************************************
00:16:52.302 END TEST bdev_verify
00:16:52.302 ************************************
00:16:52.302 17:07:44 blockdev_rbd -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:16:52.302 17:07:44 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:16:52.302 17:07:44 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:52.302 17:07:44 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:16:52.302 ************************************
00:16:52.302 START TEST bdev_verify_big_io
00:16:52.302 ************************************
00:16:52.302 17:07:44 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:16:52.302 [2024-07-25 17:07:44.661452] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:16:52.302 [2024-07-25 17:07:44.661524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76449 ]
00:16:52.561 [2024-07-25 17:07:44.801001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:52.561 [2024-07-25 17:07:44.898049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:16:52.561 [2024-07-25 17:07:44.898050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:52.819 [2024-07-25 17:07:45.062912] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun
00:16:52.819 Running I/O for 5 seconds...
00:16:58.085 
00:16:58.085 Latency(us)
00:16:58.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:58.085 Job: Ceph0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:58.085 Verification LBA range: start 0x0 length 0x1f40
00:16:58.085 Ceph0 : 5.16 511.26 31.95 0.00 0.00 244129.09 3013.60 421114.86
00:16:58.085 Job: Ceph0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:58.085 Verification LBA range: start 0x1f40 length 0x1f40
00:16:58.085 Ceph0 : 5.16 534.18 33.39 0.00 0.00 234547.75 3737.39 512075.67
00:16:58.085 ===================================================================================================================
00:16:58.086 Total : 1045.44 65.34 0.00 0.00 239231.80 3013.60 512075.67
00:16:58.086 
00:16:58.086 real 0m5.843s
00:16:58.086 user 0m11.462s
00:16:58.086 sys 0m0.686s
00:16:58.086 17:07:50 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:58.086 17:07:50 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:16:58.086 ************************************
00:16:58.086 END TEST bdev_verify_big_io
00:16:58.086 ************************************
00:16:58.086 17:07:50 blockdev_rbd -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:58.086 17:07:50 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:16:58.086 17:07:50 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:58.086 17:07:50 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:16:58.086 ************************************
00:16:58.086 START TEST bdev_write_zeroes
00:16:58.086 ************************************
00:16:58.086 17:07:50 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:58.344 [2024-07-25 17:07:50.578485] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:16:58.344 [2024-07-25 17:07:50.578566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76549 ]
00:16:58.344 [2024-07-25 17:07:50.719476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:58.602 [2024-07-25 17:07:50.816400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:58.602 [2024-07-25 17:07:50.980755] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun
00:16:58.602 Running I/O for 1 seconds...
00:16:59.980 
00:16:59.980 Latency(us)
00:16:59.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:59.980 Job: Ceph0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:59.980 Ceph0 : 1.17 4425.95 17.29 0.00 0.00 28855.78 5606.09 673783.78
00:16:59.980 ===================================================================================================================
00:16:59.980 Total : 4425.95 17.29 0.00 0.00 28855.78 5606.09 673783.78
00:16:59.980 
00:16:59.980 real 0m1.850s
00:16:59.980 user 0m1.811s
00:16:59.980 sys 0m0.206s
00:16:59.980 17:07:52 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:59.980 17:07:52 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:16:59.980 ************************************
00:16:59.980 END TEST bdev_write_zeroes
00:16:59.980 ************************************
00:16:59.980 17:07:52 blockdev_rbd -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:59.980 17:07:52 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:16:59.980 17:07:52 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:59.980 17:07:52 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:16:59.980 ************************************
00:16:59.980 START TEST bdev_json_nonenclosed
00:16:59.980 ************************************
00:16:59.980 17:07:52 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:17:00.239 [2024-07-25 17:07:52.502651] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:17:00.239 [2024-07-25 17:07:52.502760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76609 ]
00:17:00.239 [2024-07-25 17:07:52.638855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:00.498 [2024-07-25 17:07:52.735433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:00.498 [2024-07-25 17:07:52.735506] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:17:00.498 [2024-07-25 17:07:52.735518] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:17:00.498 [2024-07-25 17:07:52.735527] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:00.498 
00:17:00.498 real 0m0.385s
00:17:00.498 user 0m0.204s
00:17:00.498 sys 0m0.078s
00:17:00.498 17:07:52 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:00.498 17:07:52 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:17:00.498 ************************************
00:17:00.498 END TEST bdev_json_nonenclosed
00:17:00.498 ************************************
00:17:00.498 17:07:52 blockdev_rbd -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:17:00.498 17:07:52 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:17:00.498 17:07:52 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:00.498 17:07:52 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:17:00.498 ************************************
00:17:00.498 START TEST bdev_json_nonarray
00:17:00.498 ************************************
00:17:00.498 17:07:52 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:17:00.499 [2024-07-25 17:07:52.962820] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:17:00.499 [2024-07-25 17:07:52.963389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76636 ]
00:17:00.757 [2024-07-25 17:07:53.105575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:00.757 [2024-07-25 17:07:53.203607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:00.757 [2024-07-25 17:07:53.203675] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:17:00.757 [2024-07-25 17:07:53.203688] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:17:00.757 [2024-07-25 17:07:53.203696] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:01.016 
00:17:01.016 real 0m0.407s
00:17:01.016 user 0m0.232s
00:17:01.016 sys 0m0.071s
00:17:01.016 17:07:53 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:01.016 17:07:53 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:17:01.016 ************************************
00:17:01.016 END TEST bdev_json_nonarray
00:17:01.016 ************************************
00:17:01.016 17:07:53 blockdev_rbd -- bdev/blockdev.sh@786 -- # [[ rbd == bdev ]]
00:17:01.016 17:07:53 blockdev_rbd -- bdev/blockdev.sh@793 -- # [[ rbd == gpt ]]
00:17:01.016 17:07:53 blockdev_rbd -- bdev/blockdev.sh@797 -- # [[ rbd == crypto_sw ]]
00:17:01.016 17:07:53 blockdev_rbd -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:17:01.016 17:07:53 blockdev_rbd -- bdev/blockdev.sh@810 -- # cleanup
00:17:01.016 17:07:53 blockdev_rbd -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:17:01.016 17:07:53 blockdev_rbd -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:17:01.016 17:07:53 blockdev_rbd -- bdev/blockdev.sh@26 -- # [[ rbd == rbd ]]
00:17:01.016 17:07:53 blockdev_rbd -- bdev/blockdev.sh@27 -- # rbd_cleanup
00:17:01.016 17:07:53 blockdev_rbd -- common/autotest_common.sh@1033 -- # hash ceph
00:17:01.016 17:07:53 blockdev_rbd -- common/autotest_common.sh@1034 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh
00:17:01.016 + base_dir=/var/tmp/ceph
00:17:01.016 + image=/var/tmp/ceph/ceph_raw.img
00:17:01.016 + dev=/dev/loop200
00:17:01.016 + pkill -9 ceph
00:17:01.016 + sleep 3
00:17:04.330 + umount /dev/loop200p2
00:17:04.330 + losetup -d /dev/loop200
00:17:04.330 + rm -rf /var/tmp/ceph
00:17:04.330 17:07:56 blockdev_rbd -- common/autotest_common.sh@1035 -- # rm -f /var/tmp/ceph_raw.img
00:17:04.330 17:07:56 blockdev_rbd -- bdev/blockdev.sh@30 -- # [[ rbd == daos ]]
00:17:04.330 17:07:56 blockdev_rbd -- bdev/blockdev.sh@34 -- # [[ rbd = \g\p\t ]]
00:17:04.330 17:07:56 blockdev_rbd -- bdev/blockdev.sh@40 -- # [[ rbd == xnvme ]]
00:17:04.330 
00:17:04.330 real 1m17.916s
00:17:04.330 user 1m30.347s
00:17:04.330 sys 0m7.700s
00:17:04.330 17:07:56 blockdev_rbd -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:04.330 17:07:56 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:17:04.330 ************************************
00:17:04.330 END TEST blockdev_rbd
00:17:04.330 ************************************
00:17:04.590 17:07:56 -- spdk/autotest.sh@336 -- # run_test spdkcli_rbd /home/vagrant/spdk_repo/spdk/test/spdkcli/rbd.sh
00:17:04.590 17:07:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:17:04.590 17:07:56 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:04.590 17:07:56 -- common/autotest_common.sh@10 -- # set +x
00:17:04.590 ************************************
00:17:04.590 START TEST spdkcli_rbd
00:17:04.590 ************************************
00:17:04.590 17:07:56 spdkcli_rbd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/rbd.sh
00:17:04.590 * Looking for test storage...
00:17:04.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:17:04.590 17:07:56 spdkcli_rbd -- spdkcli/rbd.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:17:04.590 17:07:56 spdkcli_rbd -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:17:04.590 17:07:56 spdkcli_rbd -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:17:04.590 17:07:56 spdkcli_rbd -- spdkcli/rbd.sh@11 -- # MATCH_FILE=spdkcli_rbd.test
00:17:04.590 17:07:56 spdkcli_rbd -- spdkcli/rbd.sh@12 -- # SPDKCLI_BRANCH=/bdevs/rbd
00:17:04.590 17:07:56 spdkcli_rbd -- spdkcli/rbd.sh@14 -- # trap 'rbd_cleanup; cleanup' EXIT
00:17:04.590 17:07:56 spdkcli_rbd -- spdkcli/rbd.sh@15 -- # timing_enter run_spdk_tgt
00:17:04.590 17:07:56 spdkcli_rbd -- common/autotest_common.sh@724 -- # xtrace_disable
00:17:04.590 17:07:56 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x
00:17:04.590 17:07:56 spdkcli_rbd -- spdkcli/rbd.sh@16 -- # run_spdk_tgt
00:17:04.590 17:07:56 spdkcli_rbd -- spdkcli/common.sh@27 -- # spdk_tgt_pid=76748
00:17:04.590 17:07:56 spdkcli_rbd -- spdkcli/common.sh@28 -- # waitforlisten 76748
00:17:04.590 17:07:56 spdkcli_rbd -- common/autotest_common.sh@831 -- # '[' -z 76748 ']'
00:17:04.590 17:07:56 spdkcli_rbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:04.590 17:07:56 spdkcli_rbd -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:04.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:04.590 17:07:56 spdkcli_rbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:04.590 17:07:56 spdkcli_rbd -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:04.590 17:07:56 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x
00:17:04.590 17:07:56 spdkcli_rbd -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:17:04.590 [2024-07-25 17:07:57.044250] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:17:04.590 [2024-07-25 17:07:57.045162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76748 ]
00:17:04.849 [2024-07-25 17:07:57.184803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:17:04.849 [2024-07-25 17:07:57.280283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:04.849 [2024-07-25 17:07:57.280284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:05.419 17:07:57 spdkcli_rbd -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:05.419 17:07:57 spdkcli_rbd -- common/autotest_common.sh@864 -- # return 0
00:17:05.419 17:07:57 spdkcli_rbd -- spdkcli/rbd.sh@17 -- # timing_exit run_spdk_tgt
00:17:05.419 17:07:57 spdkcli_rbd -- common/autotest_common.sh@730 -- # xtrace_disable
00:17:05.419 17:07:57 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x
00:17:05.680 17:07:57 spdkcli_rbd -- spdkcli/rbd.sh@19 -- # timing_enter spdkcli_create_rbd_config
00:17:05.680 17:07:57 spdkcli_rbd -- common/autotest_common.sh@724 -- # xtrace_disable
00:17:05.680 17:07:57 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x
00:17:05.680 17:07:57 spdkcli_rbd -- spdkcli/rbd.sh@20 -- # rbd_cleanup
00:17:05.680 17:07:57 spdkcli_rbd -- common/autotest_common.sh@1033 -- # hash ceph
00:17:05.680 17:07:57 spdkcli_rbd -- common/autotest_common.sh@1034 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh
00:17:05.680 + base_dir=/var/tmp/ceph
00:17:05.680 + image=/var/tmp/ceph/ceph_raw.img
00:17:05.680 + dev=/dev/loop200
00:17:05.680 + pkill -9 ceph
00:17:05.680 + sleep 3
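rbd_cleanup's stop.sh, traced above (and about to run its umount/losetup steps below), always tears down any leftover toy cluster before a new one is built. Condensed into a bash sketch, with the loop device and paths from this run; the || true guards mirror the script tolerating an already-clean host, as the harmless failures in the surrounding trace show:

    pkill -9 ceph || true            # kill every ceph daemon
    sleep 3                          # give them time to exit
    umount /dev/loop200p2 || true    # unmount the OSD data partition
    losetup -d /dev/loop200 || true  # detach the loop device
    rm -rf /var/tmp/ceph             # drop cluster state
    rm -f /var/tmp/ceph_raw.img      # and the backing image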
00:17:12.289 + losetup -d /dev/loop200 00:17:12.289 losetup: /dev/loop200: detach failed: No such device or address 00:17:12.289 + rm -rf /var/tmp/ceph 00:17:12.289 17:08:04 spdkcli_rbd -- common/autotest_common.sh@1025 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 127.0.0.1 00:17:12.289 + set -e 00:17:12.289 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:17:12.289 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:17:12.289 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:17:12.289 + base_dir=/var/tmp/ceph 00:17:12.289 + mon_ip=127.0.0.1 00:17:12.289 + mon_dir=/var/tmp/ceph/mon.a 00:17:12.289 + pid_dir=/var/tmp/ceph/pid 00:17:12.289 + ceph_conf=/var/tmp/ceph/ceph.conf 00:17:12.289 + mnt_dir=/var/tmp/ceph/mnt 00:17:12.289 + image=/var/tmp/ceph_raw.img 00:17:12.289 + dev=/dev/loop200 00:17:12.289 + modprobe loop 00:17:12.289 + umount /dev/loop200p2 00:17:12.289 umount: /dev/loop200p2: no mount point specified. 00:17:12.289 + true 00:17:12.289 + losetup -d /dev/loop200 00:17:12.289 losetup: /dev/loop200: detach failed: No such device or address 00:17:12.289 + true 00:17:12.289 + '[' -d /var/tmp/ceph ']' 00:17:12.289 + mkdir /var/tmp/ceph 00:17:12.289 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:17:12.289 + '[' '!' -e /var/tmp/ceph_raw.img ']' 00:17:12.289 + fallocate -l 4G /var/tmp/ceph_raw.img 00:17:12.289 + mknod /dev/loop200 b 7 200 00:17:12.289 mknod: /dev/loop200: File exists 00:17:12.289 + true 00:17:12.289 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:17:12.289 Partitioning /dev/loop200 00:17:12.289 + PARTED='parted -s' 00:17:12.289 + SGDISK=sgdisk 00:17:12.289 + echo 'Partitioning /dev/loop200' 00:17:12.289 + parted -s /dev/loop200 mktable gpt 00:17:12.289 + sleep 2 00:17:14.190 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:17:14.190 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:17:14.190 Setting name on /dev/loop200 00:17:14.190 + partno=0 00:17:14.190 + echo 'Setting name on /dev/loop200' 00:17:14.190 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:17:15.124 Warning: The kernel is still using the old partition table. 00:17:15.124 The new table will be used at the next reboot or after you 00:17:15.124 run partprobe(8) or kpartx(8) 00:17:15.124 The operation has completed successfully. 00:17:15.124 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:17:16.056 Warning: The kernel is still using the old partition table. 00:17:16.056 The new table will be used at the next reboot or after you 00:17:16.056 run partprobe(8) or kpartx(8) 00:17:16.056 The operation has completed successfully. 
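The disk preparation performed by start.sh above condenses to a short sequence: a sparse 4 GiB file is bound to /dev/loop200 and split into a journal partition and a data partition, whose GPT partlabels the OSD setup later resolves through /dev/disk/by-partlabel. A sketch using the same device, size, and labels as the log (requires root and the loop module):

```sh
# Condensed replay of the start.sh disk prep above; names, sizes, and
# partition labels are taken verbatim from the log.
modprobe loop
fallocate -l 4G /var/tmp/ceph_raw.img
mknod /dev/loop200 b 7 200 2>/dev/null || true    # node may already exist
losetup /dev/loop200 /var/tmp/ceph_raw.img

parted -s /dev/loop200 mktable gpt
parted -s /dev/loop200 mkpart primary 0% 2GiB     # -> OSD journal
parted -s /dev/loop200 mkpart primary 2GiB 100%   # -> OSD data
sgdisk -c 1:osd-device-0-journal /dev/loop200     # partlabels consumed later via
sgdisk -c 2:osd-device-0-data /dev/loop200        # /dev/disk/by-partlabel/...
kpartx /dev/loop200                               # list the partition mappings
```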
00:17:16.056 + kpartx /dev/loop200 00:17:16.056 loop200p1 : 0 4192256 /dev/loop200 2048 00:17:16.056 loop200p2 : 0 4192256 /dev/loop200 4194304 00:17:16.056 ++ ceph -v 00:17:16.056 ++ awk '{print $3}' 00:17:16.314 + ceph_version=17.2.7 00:17:16.314 + ceph_maj=17 00:17:16.314 + '[' 17 -gt 12 ']' 00:17:16.314 + update_config=true 00:17:16.314 + rm -f /var/log/ceph/ceph-mon.a.log 00:17:16.314 + set_min_mon_release='--set-min-mon-release 14' 00:17:16.314 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:17:16.314 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:17:16.314 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:17:16.314 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:17:16.314 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:17:16.314 = sectsz=512 attr=2, projid32bit=1 00:17:16.314 = crc=1 finobt=1, sparse=1, rmapbt=0 00:17:16.314 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:17:16.314 data = bsize=4096 blocks=524032, imaxpct=25 00:17:16.314 = sunit=0 swidth=0 blks 00:17:16.314 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:17:16.314 log =internal log bsize=4096 blocks=16384, version=2 00:17:16.314 = sectsz=512 sunit=0 blks, lazy-count=1 00:17:16.314 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:16.314 Discarding blocks...Done. 00:17:16.314 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:17:16.314 + cat 00:17:16.314 + rm -rf '/var/tmp/ceph/mon.a/*' 00:17:16.314 + mkdir -p /var/tmp/ceph/mon.a 00:17:16.314 + mkdir -p /var/tmp/ceph/pid 00:17:16.314 + rm -f /etc/ceph/ceph.client.admin.keyring 00:17:16.314 + ceph-authtool --create-keyring --gen-key --name=mon. /var/tmp/ceph/keyring --cap mon 'allow *' 00:17:16.314 creating /var/tmp/ceph/keyring 00:17:16.314 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:17:16.314 + monmaptool --create --clobber --add a 127.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:17:16.314 monmaptool: monmap file /var/tmp/ceph/monmap 00:17:16.314 monmaptool: generated fsid ea9eddb0-0219-429f-b9b0-b6adfb6fbee3 00:17:16.314 setting min_mon_release = octopus 00:17:16.314 epoch 0 00:17:16.314 fsid ea9eddb0-0219-429f-b9b0-b6adfb6fbee3 00:17:16.314 last_changed 2024-07-25T17:08:08.741707+0000 00:17:16.314 created 2024-07-25T17:08:08.741707+0000 00:17:16.314 min_mon_release 15 (octopus) 00:17:16.314 election_strategy: 1 00:17:16.314 0: v2:127.0.0.1:12046/0 mon.a 00:17:16.314 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:17:16.314 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:17:16.571 + '[' true = true ']' 00:17:16.571 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:17:16.571 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:17:16.571 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:17:16.571 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:17:16.571 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:17:16.571 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:17:16.571 ++ hostname 00:17:16.571 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid 
--mon-data=/var/tmp/ceph/mon.a' 00:17:16.571 + true 00:17:16.571 + '[' true = true ']' 00:17:16.571 + ceph-conf --name mon.a --show-config-value log_file 00:17:16.571 /var/log/ceph/ceph-mon.a.log 00:17:16.571 ++ ceph -s 00:17:16.571 ++ grep id 00:17:16.571 ++ awk '{print $2}' 00:17:16.829 + fsid=ea9eddb0-0219-429f-b9b0-b6adfb6fbee3 00:17:16.829 + sed -i 's/perf = true/perf = true\n\tfsid = ea9eddb0-0219-429f-b9b0-b6adfb6fbee3 \n/g' /var/tmp/ceph/ceph.conf 00:17:16.829 + (( ceph_maj < 18 )) 00:17:16.829 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:17:16.829 + cat /var/tmp/ceph/ceph.conf 00:17:16.829 [global] 00:17:16.829 debug_lockdep = 0/0 00:17:16.829 debug_context = 0/0 00:17:16.829 debug_crush = 0/0 00:17:16.829 debug_buffer = 0/0 00:17:16.829 debug_timer = 0/0 00:17:16.829 debug_filer = 0/0 00:17:16.829 debug_objecter = 0/0 00:17:16.829 debug_rados = 0/0 00:17:16.829 debug_rbd = 0/0 00:17:16.829 debug_ms = 0/0 00:17:16.829 debug_monc = 0/0 00:17:16.829 debug_tp = 0/0 00:17:16.829 debug_auth = 0/0 00:17:16.829 debug_finisher = 0/0 00:17:16.829 debug_heartbeatmap = 0/0 00:17:16.829 debug_perfcounter = 0/0 00:17:16.829 debug_asok = 0/0 00:17:16.829 debug_throttle = 0/0 00:17:16.829 debug_mon = 0/0 00:17:16.829 debug_paxos = 0/0 00:17:16.829 debug_rgw = 0/0 00:17:16.829 00:17:16.829 perf = true 00:17:16.829 osd objectstore = filestore 00:17:16.829 00:17:16.829 fsid = ea9eddb0-0219-429f-b9b0-b6adfb6fbee3 00:17:16.829 00:17:16.829 mutex_perf_counter = false 00:17:16.829 throttler_perf_counter = false 00:17:16.829 rbd cache = false 00:17:16.829 mon_allow_pool_delete = true 00:17:16.829 00:17:16.829 osd_pool_default_size = 1 00:17:16.829 00:17:16.829 [mon] 00:17:16.829 mon_max_pool_pg_num=166496 00:17:16.829 mon_osd_max_split_count = 10000 00:17:16.829 mon_pg_warn_max_per_osd = 10000 00:17:16.829 00:17:16.829 [osd] 00:17:16.829 osd_op_threads = 64 00:17:16.829 filestore_queue_max_ops=5000 00:17:16.829 filestore_queue_committing_max_ops=5000 00:17:16.829 journal_max_write_entries=1000 00:17:16.829 journal_queue_max_ops=3000 00:17:16.829 objecter_inflight_ops=102400 00:17:16.829 filestore_wbthrottle_enable=false 00:17:16.829 filestore_queue_max_bytes=1048576000 00:17:16.829 filestore_queue_committing_max_bytes=1048576000 00:17:16.829 journal_max_write_bytes=1048576000 00:17:16.829 journal_queue_max_bytes=1048576000 00:17:16.829 ms_dispatch_throttle_bytes=1048576000 00:17:16.829 objecter_inflight_op_bytes=1048576000 00:17:16.829 filestore_max_sync_interval=10 00:17:16.829 osd_client_message_size_cap = 0 00:17:16.829 osd_client_message_cap = 0 00:17:16.829 osd_enable_op_tracker = false 00:17:16.829 filestore_fd_cache_size = 10240 00:17:16.829 filestore_fd_cache_shards = 64 00:17:16.829 filestore_op_threads = 16 00:17:16.829 osd_op_num_shards = 48 00:17:16.829 osd_op_num_threads_per_shard = 2 00:17:16.829 osd_pg_object_context_cache_count = 10240 00:17:16.829 filestore_odsync_write = True 00:17:16.829 journal_dynamic_throttle = True 00:17:16.829 00:17:16.829 [osd.0] 00:17:16.829 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:17:16.829 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:17:16.829 00:17:16.829 # add mon address 00:17:16.829 [mon.a] 00:17:16.829 mon addr = v2:127.0.0.1:12046 00:17:16.830 + i=0 00:17:16.830 + mkdir -p /var/tmp/ceph/mnt 00:17:17.086 ++ uuidgen 00:17:17.086 + uuid=025f417f-0afc-4d75-9bf3-5cc75b484d72 00:17:17.086 + ceph -c /var/tmp/ceph/ceph.conf osd create 025f417f-0afc-4d75-9bf3-5cc75b484d72 0 
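The monitor bring-up in the lines above reduces to four steps: keyring generation, a one-entry monmap, mon-store initialization, and the daemon start. Reassembled from the log (ulimit wrappers, conf rewriting, and keyring copies omitted):

```sh
# Single-monitor bootstrap, as performed by start.sh above.
ceph-authtool --create-keyring --gen-key --name=mon. /var/tmp/ceph/keyring \
    --cap mon 'allow *'
ceph-authtool --gen-key --name=client.admin \
    --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' \
    /var/tmp/ceph/keyring
monmaptool --create --clobber --add a 127.0.0.1:12046 \
    --set-min-mon-release 14 --print /var/tmp/ceph/monmap
ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a \
    --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring \
    --mon-data=/var/tmp/ceph/mon.a
ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring \
    --mon-data=/var/tmp/ceph/mon.a
```

Note that although the script requests `--set-min-mon-release 14`, the monmaptool output above records `min_mon_release 15 (octopus)`; that substitution apparently happens inside the tool and is not a transcription error in the log.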
00:17:17.086 0 00:17:17.344 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid 025f417f-0afc-4d75-9bf3-5cc75b484d72 --check-needs-journal --no-mon-config 00:17:17.344 2024-07-25T17:08:09.592+0000 7fa821663400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:17:17.344 2024-07-25T17:08:09.592+0000 7fa821663400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:17:17.344 2024-07-25T17:08:09.636+0000 7fa821663400 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected 025f417f-0afc-4d75-9bf3-5cc75b484d72, invalid (someone else's?) journal 00:17:17.344 2024-07-25T17:08:09.657+0000 7fa821663400 -1 journal do_read_entry(4096): bad header magic 00:17:17.344 2024-07-25T17:08:09.657+0000 7fa821663400 -1 journal do_read_entry(4096): bad header magic 00:17:17.344 ++ hostname 00:17:17.344 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:17:18.720 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:17:18.720 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:17:18.979 added key for osd.0 00:17:18.979 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:17:19.238 + class_dir=/lib64/rados-classes 00:17:19.238 + [[ -e /lib64/rados-classes ]] 00:17:19.238 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:17:19.496 + pkill -9 ceph-osd 00:17:19.496 + true 00:17:19.496 + sleep 2 00:17:21.397 + mkdir -p /var/tmp/ceph/pid 00:17:21.397 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:17:21.397 2024-07-25T17:08:13.818+0000 7fe82825c400 -1 Falling back to public interface 00:17:21.397 2024-07-25T17:08:13.862+0000 7fe82825c400 -1 journal do_read_entry(8192): bad header magic 00:17:21.397 2024-07-25T17:08:13.862+0000 7fe82825c400 -1 journal do_read_entry(8192): bad header magic 00:17:21.656 2024-07-25T17:08:13.868+0000 7fe82825c400 -1 osd.0 0 log_to_monitors true 00:17:21.656 17:08:13 spdkcli_rbd -- common/autotest_common.sh@1027 -- # ceph osd pool create rbd 128 00:17:22.592 pool 'rbd' created 00:17:22.592 17:08:15 spdkcli_rbd -- common/autotest_common.sh@1028 -- # rbd create foo --size 1000 00:17:26.818 17:08:19 spdkcli_rbd -- spdkcli/rbd.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py '"/bdevs/rbd create rbd foo 512'\'' '\''Ceph0'\'' True "/bdevs/rbd' create rbd foo 512 Ceph1 'True 00:17:26.818 timing_exit spdkcli_create_rbd_config 00:17:26.818 00:17:26.818 timing_enter spdkcli_check_match 00:17:26.818 check_match 00:17:26.818 timing_exit spdkcli_check_match 00:17:26.818 00:17:26.818 timing_enter spdkcli_clear_rbd_config 00:17:26.818 /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py "/bdevs/rbd' delete Ceph0 Ceph0 '"/bdevs/rbd delete_all'\'' '\''Ceph1'\'' ' 00:17:27.385 Executing command: [' ', True] 00:17:27.385 17:08:19 spdkcli_rbd -- spdkcli/rbd.sh@31 -- # rbd_cleanup 00:17:27.385 17:08:19 spdkcli_rbd -- common/autotest_common.sh@1033 -- # hash ceph 00:17:27.385 17:08:19 spdkcli_rbd -- common/autotest_common.sh@1034 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:17:27.385 + 
base_dir=/var/tmp/ceph 00:17:27.385 + image=/var/tmp/ceph/ceph_raw.img 00:17:27.385 + dev=/dev/loop200 00:17:27.385 + pkill -9 ceph 00:17:27.385 + sleep 3 00:17:30.672 + umount /dev/loop200p2 00:17:30.672 + losetup -d /dev/loop200 00:17:30.672 + rm -rf /var/tmp/ceph 00:17:30.672 17:08:22 spdkcli_rbd -- common/autotest_common.sh@1035 -- # rm -f /var/tmp/ceph_raw.img 00:17:30.672 17:08:22 spdkcli_rbd -- spdkcli/rbd.sh@32 -- # timing_exit spdkcli_clear_rbd_config 00:17:30.672 17:08:22 spdkcli_rbd -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:30.672 17:08:22 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:17:30.672 17:08:22 spdkcli_rbd -- spdkcli/rbd.sh@34 -- # killprocess 76748 00:17:30.672 17:08:22 spdkcli_rbd -- common/autotest_common.sh@950 -- # '[' -z 76748 ']' 00:17:30.672 17:08:22 spdkcli_rbd -- common/autotest_common.sh@954 -- # kill -0 76748 00:17:30.672 17:08:22 spdkcli_rbd -- common/autotest_common.sh@955 -- # uname 00:17:30.672 17:08:22 spdkcli_rbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:30.672 17:08:22 spdkcli_rbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76748 00:17:30.672 17:08:22 spdkcli_rbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:30.672 17:08:22 spdkcli_rbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:30.672 killing process with pid 76748 00:17:30.672 17:08:22 spdkcli_rbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76748' 00:17:30.672 17:08:22 spdkcli_rbd -- common/autotest_common.sh@969 -- # kill 76748 00:17:30.673 17:08:22 spdkcli_rbd -- common/autotest_common.sh@974 -- # wait 76748 00:17:30.931 17:08:23 spdkcli_rbd -- spdkcli/rbd.sh@1 -- # rbd_cleanup 00:17:30.931 17:08:23 spdkcli_rbd -- common/autotest_common.sh@1033 -- # hash ceph 00:17:30.931 17:08:23 spdkcli_rbd -- common/autotest_common.sh@1034 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:17:30.931 + base_dir=/var/tmp/ceph 00:17:30.931 + image=/var/tmp/ceph/ceph_raw.img 00:17:30.931 + dev=/dev/loop200 00:17:30.931 + pkill -9 ceph 00:17:30.931 + sleep 3 00:17:34.219 + umount /dev/loop200p2 00:17:34.219 umount: /dev/loop200p2: no mount point specified. 
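The spdkcli_job.py invocation above drives the whole RBD round-trip (create bdevs Ceph0/Ceph1, match against spdkcli_rbd.test, delete) through the spdkcli shell. The same round-trip can be expressed more plainly with SPDK's rpc.py, whose bdev_rbd_create RPC takes a pool name, image name, and block size and prints the new bdev's name; pool, image, PG count, and block size below come from the log, while the script paths are assumptions about the checkout layout:

```sh
# Hedged replay of the RBD bdev round-trip via rpc.py instead of spdkcli.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ceph osd pool create rbd 128        # PG_NUM=128, as exported by rbd_setup
rbd create foo --size 1000          # 1000 MiB test image

name=$("$RPC" -s /var/tmp/spdk.sock bdev_rbd_create rbd foo 512)
echo "created bdev: $name"          # e.g. Ceph0
"$RPC" -s /var/tmp/spdk.sock bdev_rbd_delete "$name"
```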
00:17:34.219 + losetup -d /dev/loop200 00:17:34.219 losetup: /dev/loop200: detach failed: No such device or address 00:17:34.219 + rm -rf /var/tmp/ceph 00:17:34.219 17:08:26 spdkcli_rbd -- common/autotest_common.sh@1035 -- # rm -f /var/tmp/ceph_raw.img 00:17:34.219 17:08:26 spdkcli_rbd -- spdkcli/rbd.sh@1 -- # cleanup 00:17:34.219 17:08:26 spdkcli_rbd -- spdkcli/common.sh@10 -- # '[' -n 76748 ']' 00:17:34.219 17:08:26 spdkcli_rbd -- spdkcli/common.sh@11 -- # killprocess 76748 00:17:34.219 17:08:26 spdkcli_rbd -- common/autotest_common.sh@950 -- # '[' -z 76748 ']' 00:17:34.219 17:08:26 spdkcli_rbd -- common/autotest_common.sh@954 -- # kill -0 76748 00:17:34.219 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76748) - No such process 00:17:34.219 Process with pid 76748 is not found 00:17:34.219 17:08:26 spdkcli_rbd -- common/autotest_common.sh@977 -- # echo 'Process with pid 76748 is not found' 00:17:34.219 17:08:26 spdkcli_rbd -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:17:34.219 17:08:26 spdkcli_rbd -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:17:34.219 17:08:26 spdkcli_rbd -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:17:34.219 17:08:26 spdkcli_rbd -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_rbd.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:17:34.219 00:17:34.219 real 0m29.555s 00:17:34.219 user 0m54.421s 00:17:34.219 sys 0m1.624s 00:17:34.219 17:08:26 spdkcli_rbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:34.219 17:08:26 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:17:34.219 ************************************ 00:17:34.219 END TEST spdkcli_rbd 00:17:34.219 ************************************ 00:17:34.219 17:08:26 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:17:34.219 17:08:26 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:17:34.219 17:08:26 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:17:34.219 17:08:26 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:17:34.219 17:08:26 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:17:34.219 17:08:26 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:17:34.219 17:08:26 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:17:34.219 17:08:26 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:17:34.219 17:08:26 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:17:34.219 17:08:26 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:17:34.219 17:08:26 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:17:34.219 17:08:26 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:17:34.219 17:08:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:34.219 17:08:26 -- common/autotest_common.sh@10 -- # set +x 00:17:34.219 17:08:26 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:17:34.219 17:08:26 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:17:34.219 17:08:26 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:17:34.219 17:08:26 -- common/autotest_common.sh@10 -- # set +x 00:17:36.819 INFO: APP EXITING 00:17:36.819 INFO: killing all VMs 00:17:36.819 INFO: killing vhost app 00:17:36.819 INFO: EXIT DONE 00:17:37.079 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:37.079 Waiting for block devices as requested 00:17:37.079 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:37.338 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:38.283 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:38.283 Cleaning 00:17:38.283 Removing: /var/run/dpdk/spdk0/config 00:17:38.283 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:38.283 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:38.283 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:38.283 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:38.283 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:38.283 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:38.283 Removing: /var/run/dpdk/spdk1/config 00:17:38.283 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:17:38.283 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:17:38.283 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:17:38.283 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:17:38.283 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:17:38.283 Removing: /var/run/dpdk/spdk1/hugepage_info 00:17:38.283 Removing: /dev/shm/iscsi_trace.pid67709 00:17:38.283 Removing: /dev/shm/spdk_tgt_trace.pid58761 00:17:38.283 Removing: /var/run/dpdk/spdk0 00:17:38.283 Removing: /var/run/dpdk/spdk1 00:17:38.283 Removing: /var/run/dpdk/spdk_pid58616 00:17:38.283 Removing: /var/run/dpdk/spdk_pid58761 00:17:38.283 Removing: /var/run/dpdk/spdk_pid58959 00:17:38.283 Removing: /var/run/dpdk/spdk_pid59040 00:17:38.283 Removing: /var/run/dpdk/spdk_pid59068 00:17:38.283 Removing: /var/run/dpdk/spdk_pid59177 00:17:38.283 Removing: /var/run/dpdk/spdk_pid59195 00:17:38.283 Removing: /var/run/dpdk/spdk_pid59313 00:17:38.283 Removing: /var/run/dpdk/spdk_pid59489 00:17:38.283 Removing: /var/run/dpdk/spdk_pid59670 00:17:38.283 Removing: /var/run/dpdk/spdk_pid59734 00:17:38.283 Removing: /var/run/dpdk/spdk_pid59805 00:17:38.283 Removing: /var/run/dpdk/spdk_pid59896 00:17:38.283 Removing: /var/run/dpdk/spdk_pid59965 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60010 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60040 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60099 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60201 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60612 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60664 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60715 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60725 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60787 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60803 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60870 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60886 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60926 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60944 00:17:38.283 Removing: /var/run/dpdk/spdk_pid60984 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61002 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61119 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61160 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61229 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61550 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61574 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61593 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61637 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61647 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61664 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61686 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61695 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61741 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61761 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61806 00:17:38.283 Removing: /var/run/dpdk/spdk_pid61898 00:17:38.283 Removing: /var/run/dpdk/spdk_pid62652 00:17:38.283 Removing: 
/var/run/dpdk/spdk_pid63095 00:17:38.283 Removing: /var/run/dpdk/spdk_pid63366 00:17:38.283 Removing: /var/run/dpdk/spdk_pid63667 00:17:38.283 Removing: /var/run/dpdk/spdk_pid63906 00:17:38.283 Removing: /var/run/dpdk/spdk_pid64470 00:17:38.283 Removing: /var/run/dpdk/spdk_pid65918 00:17:38.283 Removing: /var/run/dpdk/spdk_pid66615 00:17:38.544 Removing: /var/run/dpdk/spdk_pid67372 00:17:38.544 Removing: /var/run/dpdk/spdk_pid67405 00:17:38.544 Removing: /var/run/dpdk/spdk_pid67709 00:17:38.544 Removing: /var/run/dpdk/spdk_pid68975 00:17:38.544 Removing: /var/run/dpdk/spdk_pid69351 00:17:38.544 Removing: /var/run/dpdk/spdk_pid69397 00:17:38.544 Removing: /var/run/dpdk/spdk_pid69794 00:17:38.544 Removing: /var/run/dpdk/spdk_pid73013 00:17:38.544 Removing: /var/run/dpdk/spdk_pid73311 00:17:38.544 Removing: /var/run/dpdk/spdk_pid73355 00:17:38.544 Removing: /var/run/dpdk/spdk_pid73433 00:17:38.544 Removing: /var/run/dpdk/spdk_pid73495 00:17:38.544 Removing: /var/run/dpdk/spdk_pid73560 00:17:38.544 Removing: /var/run/dpdk/spdk_pid73726 00:17:38.544 Removing: /var/run/dpdk/spdk_pid73770 00:17:38.544 Removing: /var/run/dpdk/spdk_pid73785 00:17:38.544 Removing: /var/run/dpdk/spdk_pid73812 00:17:38.544 Removing: /var/run/dpdk/spdk_pid73827 00:17:38.544 Removing: /var/run/dpdk/spdk_pid73899 00:17:38.544 Removing: /var/run/dpdk/spdk_pid73942 00:17:38.544 Removing: /var/run/dpdk/spdk_pid74151 00:17:38.544 Removing: /var/run/dpdk/spdk_pid74454 00:17:38.544 Removing: /var/run/dpdk/spdk_pid74703 00:17:38.544 Removing: /var/run/dpdk/spdk_pid75588 00:17:38.544 Removing: /var/run/dpdk/spdk_pid75632 00:17:38.544 Removing: /var/run/dpdk/spdk_pid75913 00:17:38.544 Removing: /var/run/dpdk/spdk_pid76102 00:17:38.544 Removing: /var/run/dpdk/spdk_pid76270 00:17:38.544 Removing: /var/run/dpdk/spdk_pid76449 00:17:38.544 Removing: /var/run/dpdk/spdk_pid76549 00:17:38.544 Removing: /var/run/dpdk/spdk_pid76609 00:17:38.544 Removing: /var/run/dpdk/spdk_pid76636 00:17:38.544 Removing: /var/run/dpdk/spdk_pid76748 00:17:38.544 Clean 00:17:38.544 17:08:30 -- common/autotest_common.sh@1451 -- # return 0 00:17:38.544 17:08:30 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:17:38.544 17:08:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:38.544 17:08:30 -- common/autotest_common.sh@10 -- # set +x 00:17:38.544 17:08:30 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:17:38.544 17:08:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:38.544 17:08:30 -- common/autotest_common.sh@10 -- # set +x 00:17:38.803 17:08:31 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:38.803 17:08:31 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:17:38.803 17:08:31 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:17:38.803 17:08:31 -- spdk/autotest.sh@395 -- # hash lcov 00:17:38.803 17:08:31 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:17:38.803 17:08:31 -- spdk/autotest.sh@397 -- # hostname 00:17:38.803 17:08:31 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:17:38.803 geninfo: WARNING: invalid characters removed from testname! 
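The lcov capture just above, together with the merge and filter passes that follow in the next lines, amounts to a standard three-stage coverage flow: capture test counters, add them to the pre-test baseline, then strip out external and vendored code. A condensed sketch (the genhtml rc switches are trimmed for brevity; the exclude globs are verbatim from the log):

```sh
# Capture, merge, and filter coverage as the surrounding log lines do.
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
out=/home/vagrant/spdk_repo/spdk/../output

$LCOV -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$out/cov_test.info"
$LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
for glob in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    $LCOV -r "$out/cov_total.info" "$glob" -o "$out/cov_total.info"
done
rm -f "$out/cov_base.info" "$out/cov_test.info"
```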
00:18:05.357 17:08:54 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:05.357 17:08:57 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:07.892 17:08:59 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:09.799 17:09:01 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:11.701 17:09:03 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:14.248 17:09:06 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:16.155 17:09:08 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:18:16.155 17:09:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.155 17:09:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:18:16.155 17:09:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.155 17:09:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.155 17:09:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.155 17:09:08 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.155 17:09:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.155 17:09:08 -- paths/export.sh@5 -- $ export PATH 00:18:16.155 17:09:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.155 17:09:08 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:18:16.155 17:09:08 -- common/autobuild_common.sh@447 -- $ date +%s 00:18:16.155 17:09:08 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721927348.XXXXXX 00:18:16.155 17:09:08 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721927348.pjBbUN 00:18:16.155 17:09:08 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:18:16.155 17:09:08 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:18:16.155 17:09:08 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:18:16.155 17:09:08 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:18:16.155 17:09:08 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:18:16.155 17:09:08 -- common/autobuild_common.sh@463 -- $ get_config_params 00:18:16.155 17:09:08 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:18:16.155 17:09:08 -- common/autotest_common.sh@10 -- $ set +x 00:18:16.155 17:09:08 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-coverage --with-ublk' 00:18:16.155 17:09:08 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:18:16.155 17:09:08 -- pm/common@17 -- $ local monitor 00:18:16.155 17:09:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:18:16.155 17:09:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:18:16.155 17:09:08 -- pm/common@25 -- $ sleep 1 00:18:16.155 17:09:08 -- pm/common@21 -- $ date +%s 00:18:16.155 17:09:08 -- pm/common@21 -- $ date +%s 00:18:16.155 17:09:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721927348 00:18:16.155 17:09:08 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721927348 00:18:16.155 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721927348_collect-vmstat.pm.log 00:18:16.155 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721927348_collect-cpu-load.pm.log 00:18:17.091 17:09:09 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:18:17.091 17:09:09 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:18:17.091 17:09:09 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:18:17.091 17:09:09 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:18:17.091 17:09:09 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:18:17.091 17:09:09 -- spdk/autopackage.sh@19 -- $ timing_finish 00:18:17.091 17:09:09 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:18:17.091 17:09:09 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:18:17.091 17:09:09 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:18:17.091 17:09:09 -- spdk/autopackage.sh@20 -- $ exit 0 00:18:17.091 17:09:09 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:18:17.091 17:09:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:18:17.091 17:09:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:18:17.091 17:09:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:18:17.091 17:09:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:18:17.091 17:09:09 -- pm/common@44 -- $ pid=79352 00:18:17.091 17:09:09 -- pm/common@50 -- $ kill -TERM 79352 00:18:17.091 17:09:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:18:17.091 17:09:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:18:17.091 17:09:09 -- pm/common@44 -- $ pid=79354 00:18:17.091 17:09:09 -- pm/common@50 -- $ kill -TERM 79354 00:18:17.091 + [[ -n 5108 ]] 00:18:17.091 + sudo kill 5108 00:18:17.359 [Pipeline] } 00:18:17.380 [Pipeline] // timeout 00:18:17.387 [Pipeline] } 00:18:17.407 [Pipeline] // stage 00:18:17.414 [Pipeline] } 00:18:17.432 [Pipeline] // catchError 00:18:17.444 [Pipeline] stage 00:18:17.447 [Pipeline] { (Stop VM) 00:18:17.462 [Pipeline] sh 00:18:17.742 + vagrant halt 00:18:21.142 ==> default: Halting domain... 00:18:27.716 [Pipeline] sh 00:18:27.995 + vagrant destroy -f 00:18:31.279 ==> default: Removing domain... 
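The stop_monitor_resources step above follows a simple pidfile pattern: each resource monitor wrote its PID to a file under the power output directory at startup, and teardown sends SIGTERM to whatever PID each file names. A sketch with the file names taken from the log:

```sh
# Pidfile-based monitor shutdown, as in pm/common's stop_monitor_resources.
power_dir=/home/vagrant/spdk_repo/spdk/../output/power
for pidfile in "$power_dir"/collect-cpu-load.pid "$power_dir"/collect-vmstat.pid; do
    [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
done
```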
00:18:31.292 [Pipeline] sh 00:18:31.571 + mv output /var/jenkins/workspace/iscsi-vg-autotest_2/output 00:18:31.585 [Pipeline] } 00:18:31.599 [Pipeline] // stage 00:18:31.604 [Pipeline] } 00:18:31.617 [Pipeline] // dir 00:18:31.621 [Pipeline] } 00:18:31.634 [Pipeline] // wrap 00:18:31.639 [Pipeline] } 00:18:31.650 [Pipeline] // catchError 00:18:31.656 [Pipeline] stage 00:18:31.657 [Pipeline] { (Epilogue) 00:18:31.666 [Pipeline] sh 00:18:31.946 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:18:37.231 [Pipeline] catchError 00:18:37.233 [Pipeline] { 00:18:37.248 [Pipeline] sh 00:18:37.528 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:18:37.528 Artifacts sizes are good 00:18:37.536 [Pipeline] } 00:18:37.554 [Pipeline] // catchError 00:18:37.566 [Pipeline] archiveArtifacts 00:18:37.573 Archiving artifacts 00:18:38.769 [Pipeline] cleanWs 00:18:38.778 [WS-CLEANUP] Deleting project workspace... 00:18:38.778 [WS-CLEANUP] Deferred wipeout is used... 00:18:38.783 [WS-CLEANUP] done 00:18:38.785 [Pipeline] } 00:18:38.800 [Pipeline] // stage 00:18:38.805 [Pipeline] } 00:18:38.819 [Pipeline] // node 00:18:38.823 [Pipeline] End of Pipeline 00:18:38.862 Finished: SUCCESS