00:00:00.001 Started by upstream project "autotest-nightly" build number 3920
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3295
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.130 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/iscsi-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.131 The recommended git tool is: git
00:00:00.131 using credential 00000000-0000-0000-0000-000000000002
00:00:00.135 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/iscsi-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.180 Fetching changes from the remote Git repository
00:00:00.181 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.220 Using shallow fetch with depth 1
00:00:00.220 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.220 > git --version # timeout=10
00:00:00.259 > git --version # 'git version 2.39.2'
00:00:00.259 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.283 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.283 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.363 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.373 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.383 Checking out Revision c396a3cd44e4090a57fb151c18fefbf4a9bd324b (FETCH_HEAD)
00:00:08.383 > git config core.sparsecheckout # timeout=10
00:00:08.394 > git read-tree -mu HEAD # timeout=10
00:00:08.410 > git checkout -f c396a3cd44e4090a57fb151c18fefbf4a9bd324b # timeout=5
00:00:08.430 Commit message: "jenkins/jjb-config: Use freebsd14 for the pkgdep-freebsd job"
00:00:08.430 > git rev-list --no-walk c396a3cd44e4090a57fb151c18fefbf4a9bd324b # timeout=10
00:00:08.527 [Pipeline] Start of Pipeline
00:00:08.543 [Pipeline] library
00:00:08.545 Loading library shm_lib@master
00:00:08.545 Library shm_lib@master is cached. Copying from home.
00:00:08.558 [Pipeline] node
00:00:08.567 Running on VM-host-WFP7 in /var/jenkins/workspace/iscsi-vg-autotest
00:00:08.568 [Pipeline] {
00:00:08.576 [Pipeline] catchError
00:00:08.577 [Pipeline] {
00:00:08.587 [Pipeline] wrap
00:00:08.594 [Pipeline] {
00:00:08.600 [Pipeline] stage
00:00:08.602 [Pipeline] { (Prologue)
00:00:08.619 [Pipeline] echo
00:00:08.620 Node: VM-host-WFP7
00:00:08.625 [Pipeline] cleanWs
00:00:08.636 [WS-CLEANUP] Deleting project workspace...
00:00:08.636 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.642 [WS-CLEANUP] done
00:00:08.957 [Pipeline] setCustomBuildProperty
00:00:09.031 [Pipeline] httpRequest
00:00:09.060 [Pipeline] echo
00:00:09.061 Sorcerer 10.211.164.101 is alive
00:00:09.069 [Pipeline] httpRequest
00:00:09.073 HttpMethod: GET
00:00:09.073 URL: http://10.211.164.101/packages/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:09.074 Sending request to url: http://10.211.164.101/packages/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:09.089 Response Code: HTTP/1.1 200 OK
00:00:09.090 Success: Status code 200 is in the accepted range: 200,404
00:00:09.090 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:13.243 [Pipeline] sh
00:00:13.544 + tar --no-same-owner -xf jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:13.558 [Pipeline] httpRequest
00:00:13.584 [Pipeline] echo
00:00:13.586 Sorcerer 10.211.164.101 is alive
00:00:13.594 [Pipeline] httpRequest
00:00:13.598 HttpMethod: GET
00:00:13.599 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:13.599 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:13.622 Response Code: HTTP/1.1 200 OK
00:00:13.623 Success: Status code 200 is in the accepted range: 200,404
00:00:13.623 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:01:57.912 [Pipeline] sh
00:01:58.190 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:02:00.733 [Pipeline] sh
00:02:01.041 + git -C spdk log --oneline -n5
00:02:01.041 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:02:01.041 fc2398dfa raid: clear base bdev configure_cb after executing
00:02:01.041 5558f3f50 raid: complete bdev_raid_create after sb is written
00:02:01.041 d005e023b raid: fix empty slot not updated in sb after resize
00:02:01.041 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:02:01.058 [Pipeline] writeFile
00:02:01.067 [Pipeline] sh
00:02:01.343 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:01.354 [Pipeline] sh
00:02:01.637 + cat autorun-spdk.conf
00:02:01.637 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:01.637 SPDK_TEST_ISCSI_INITIATOR=1
00:02:01.637 SPDK_TEST_ISCSI=1
00:02:01.637 SPDK_TEST_RBD=1
00:02:01.637 SPDK_RUN_ASAN=1
00:02:01.637 SPDK_RUN_UBSAN=1
00:02:01.637 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:01.644 RUN_NIGHTLY=1
00:02:01.646 [Pipeline] }
00:02:01.659 [Pipeline] // stage
00:02:01.670 [Pipeline] stage
00:02:01.672 [Pipeline] { (Run VM)
00:02:01.682 [Pipeline] sh
00:02:01.961 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:01.961 + echo 'Start stage prepare_nvme.sh'
00:02:01.961 Start stage prepare_nvme.sh
00:02:01.961 + [[ -n 5 ]]
00:02:01.961 + disk_prefix=ex5
00:02:01.961 + [[ -n /var/jenkins/workspace/iscsi-vg-autotest ]]
00:02:01.961 + [[ -e /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf ]]
00:02:01.961 + source /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf
00:02:01.961 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:01.961 ++ SPDK_TEST_ISCSI_INITIATOR=1
00:02:01.961 ++ SPDK_TEST_ISCSI=1
00:02:01.961 ++ SPDK_TEST_RBD=1
00:02:01.961 ++ SPDK_RUN_ASAN=1
00:02:01.961 ++ SPDK_RUN_UBSAN=1
00:02:01.961 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:01.961 ++ RUN_NIGHTLY=1
00:02:01.961 + cd /var/jenkins/workspace/iscsi-vg-autotest
00:02:01.961 + nvme_files=()
00:02:01.961 + declare -A nvme_files
00:02:01.961 + backend_dir=/var/lib/libvirt/images/backends
00:02:01.961 + nvme_files['nvme.img']=5G
00:02:01.961 + nvme_files['nvme-cmb.img']=5G
00:02:01.961 + nvme_files['nvme-multi0.img']=4G
00:02:01.961 + nvme_files['nvme-multi1.img']=4G
00:02:01.961 + nvme_files['nvme-multi2.img']=4G
00:02:01.961 + nvme_files['nvme-openstack.img']=8G
00:02:01.961 + nvme_files['nvme-zns.img']=5G
00:02:01.961 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:01.961 + (( SPDK_TEST_FTL == 1 ))
00:02:01.961 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:01.961 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:01.961 + for nvme in "${!nvme_files[@]}"
00:02:01.961 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:02:01.961 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:01.961 + for nvme in "${!nvme_files[@]}"
00:02:01.961 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:02:01.961 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:01.961 + for nvme in "${!nvme_files[@]}"
00:02:01.961 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:02:01.961 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:01.961 + for nvme in "${!nvme_files[@]}"
00:02:01.961 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:02:01.961 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:01.961 + for nvme in "${!nvme_files[@]}"
00:02:01.961 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:02:01.961 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:01.961 + for nvme in "${!nvme_files[@]}"
00:02:01.961 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:02:01.961 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:01.961 + for nvme in "${!nvme_files[@]}"
00:02:01.961 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:02:02.218 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:02.218 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:02:02.477 + echo 'End stage prepare_nvme.sh'
00:02:02.477 End stage prepare_nvme.sh
00:02:02.488 [Pipeline] sh
00:02:02.838 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:02.839 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38
00:02:02.839
00:02:02.839 DIR=/var/jenkins/workspace/iscsi-vg-autotest/spdk/scripts/vagrant
00:02:02.839 SPDK_DIR=/var/jenkins/workspace/iscsi-vg-autotest/spdk
00:02:02.839 VAGRANT_TARGET=/var/jenkins/workspace/iscsi-vg-autotest
00:02:02.839 HELP=0
00:02:02.839 DRY_RUN=0
00:02:02.839 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:02:02.839 NVME_DISKS_TYPE=nvme,nvme,
00:02:02.839 NVME_AUTO_CREATE=0
00:02:02.839 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:02:02.839 NVME_CMB=,,
00:02:02.839 NVME_PMR=,,
00:02:02.839 NVME_ZNS=,,
00:02:02.839 NVME_MS=,,
00:02:02.839 NVME_FDP=,,
00:02:02.839 SPDK_VAGRANT_DISTRO=fedora38
00:02:02.839 SPDK_VAGRANT_VMCPU=10
00:02:02.839 SPDK_VAGRANT_VMRAM=12288
00:02:02.839 SPDK_VAGRANT_PROVIDER=libvirt
00:02:02.839 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:02.839 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:02.839 SPDK_OPENSTACK_NETWORK=0
00:02:02.839 VAGRANT_PACKAGE_BOX=0
00:02:02.839 VAGRANTFILE=/var/jenkins/workspace/iscsi-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:02.839 FORCE_DISTRO=true
00:02:02.839 VAGRANT_BOX_VERSION=
00:02:02.839 EXTRA_VAGRANTFILES=
00:02:02.839 NIC_MODEL=virtio
00:02:02.839
00:02:02.839 mkdir: created directory '/var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt'
00:02:02.839 /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt /var/jenkins/workspace/iscsi-vg-autotest
00:02:05.370 Bringing machine 'default' up with 'libvirt' provider...
00:02:05.633 ==> default: Creating image (snapshot of base box volume).
00:02:05.633 ==> default: Creating domain with the following settings...
00:02:05.633 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721897052_6c27c7a4e2ebf5f7a7a7
00:02:05.633 ==> default: -- Domain type: kvm
00:02:05.633 ==> default: -- Cpus: 10
00:02:05.633 ==> default: -- Feature: acpi
00:02:05.633 ==> default: -- Feature: apic
00:02:05.633 ==> default: -- Feature: pae
00:02:05.633 ==> default: -- Memory: 12288M
00:02:05.633 ==> default: -- Memory Backing: hugepages:
00:02:05.633 ==> default: -- Management MAC:
00:02:05.633 ==> default: -- Loader:
00:02:05.633 ==> default: -- Nvram:
00:02:05.633 ==> default: -- Base box: spdk/fedora38
00:02:05.633 ==> default: -- Storage pool: default
00:02:05.633 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721897052_6c27c7a4e2ebf5f7a7a7.img (20G)
00:02:05.633 ==> default: -- Volume Cache: default
00:02:05.633 ==> default: -- Kernel:
00:02:05.633 ==> default: -- Initrd:
00:02:05.633 ==> default: -- Graphics Type: vnc
00:02:05.633 ==> default: -- Graphics Port: -1
00:02:05.633 ==> default: -- Graphics IP: 127.0.0.1
00:02:05.633 ==> default: -- Graphics Password: Not defined
00:02:05.633 ==> default: -- Video Type: cirrus
00:02:05.633 ==> default: -- Video VRAM: 9216
00:02:05.633 ==> default: -- Sound Type:
00:02:05.633 ==> default: -- Keymap: en-us
00:02:05.633 ==> default: -- TPM Path:
00:02:05.633 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:05.633 ==> default: -- Command line args:
00:02:05.633 ==> default: -> value=-device,
00:02:05.633 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:05.633 ==> default: -> value=-drive,
00:02:05.633 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:02:05.633 ==> default: -> value=-device,
00:02:05.633 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:05.633 ==> default: -> value=-device,
00:02:05.633 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:05.633 ==> default: -> value=-drive,
00:02:05.633 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:02:05.633 ==> default: -> value=-device,
00:02:05.633 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:05.633 ==> default: -> value=-drive,
00:02:05.633 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:02:05.633 ==> default: -> value=-device,
00:02:05.633 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:05.633 ==> default: -> value=-drive,
00:02:05.633 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:02:05.633 ==> default: -> value=-device,
00:02:05.633 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:05.900 ==> default: Creating shared folders metadata...
00:02:05.900 ==> default: Starting domain.
00:02:07.805 ==> default: Waiting for domain to get an IP address...
00:02:25.999 ==> default: Waiting for SSH to become available...
00:02:26.990 ==> default: Configuring and enabling network interfaces...
00:02:33.559 default: SSH address: 192.168.121.115:22
00:02:33.559 default: SSH username: vagrant
00:02:33.559 default: SSH auth method: private key
00:02:36.109 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:44.227 ==> default: Mounting SSHFS shared folder...
00:02:46.129 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:02:46.129 ==> default: Checking Mount..
00:02:48.030 ==> default: Folder Successfully Mounted!
00:02:48.030 ==> default: Running provisioner: file...
00:02:48.596 default: ~/.gitconfig => .gitconfig
00:02:49.162
00:02:49.162 SUCCESS!
00:02:49.162
00:02:49.162 cd to /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:02:49.162 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:49.162 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:02:49.162
00:02:49.171 [Pipeline] }
00:02:49.188 [Pipeline] // stage
00:02:49.196 [Pipeline] dir
00:02:49.197 Running in /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt
00:02:49.199 [Pipeline] {
00:02:49.213 [Pipeline] catchError
00:02:49.215 [Pipeline] {
00:02:49.229 [Pipeline] sh
00:02:49.507 + vagrant ssh-config --host vagrant
00:02:49.507 + sed -ne /^Host/,$p
00:02:49.507 + tee ssh_conf
00:02:52.812 Host vagrant
00:02:52.812 HostName 192.168.121.115
00:02:52.812 User vagrant
00:02:52.812 Port 22
00:02:52.812 UserKnownHostsFile /dev/null
00:02:52.812 StrictHostKeyChecking no
00:02:52.812 PasswordAuthentication no
00:02:52.812 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:02:52.812 IdentitiesOnly yes
00:02:52.812 LogLevel FATAL
00:02:52.812 ForwardAgent yes
00:02:52.812 ForwardX11 yes
00:02:52.812
00:02:52.826 [Pipeline] withEnv
00:02:52.828 [Pipeline] {
00:02:52.842 [Pipeline] sh
00:02:53.121 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:53.121 source /etc/os-release
00:02:53.121 [[ -e /image.version ]] && img=$(< /image.version)
00:02:53.121 # Minimal, systemd-like check.
00:02:53.121 if [[ -e /.dockerenv ]]; then
00:02:53.121 # Clear garbage from the node's name:
00:02:53.121 # agt-er_autotest_547-896 -> autotest_547-896
00:02:53.121 # $HOSTNAME is the actual container id
00:02:53.121 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:53.121 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:53.121 # We can assume this is a mount from a host where container is running,
00:02:53.121 # so fetch its hostname to easily identify the target swarm worker.
00:02:53.121 container="$(< /etc/hostname) ($agent)"
00:02:53.121 else
00:02:53.121 # Fallback
00:02:53.121 container=$agent
00:02:53.121 fi
00:02:53.121 fi
00:02:53.121 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:53.121
00:02:53.391 [Pipeline] }
00:02:53.410 [Pipeline] // withEnv
00:02:53.418 [Pipeline] setCustomBuildProperty
00:02:53.431 [Pipeline] stage
00:02:53.433 [Pipeline] { (Tests)
00:02:53.449 [Pipeline] sh
00:02:53.730 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:54.001 [Pipeline] sh
00:02:54.279 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:54.551 [Pipeline] timeout
00:02:54.551 Timeout set to expire in 45 min
00:02:54.553 [Pipeline] {
00:02:54.566 [Pipeline] sh
00:02:54.848 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:55.416 HEAD is now at 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:02:55.431 [Pipeline] sh
00:02:55.712 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:55.983 [Pipeline] sh
00:02:56.266 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:56.540 [Pipeline] sh
00:02:56.820 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=iscsi-vg-autotest ./autoruner.sh spdk_repo
00:02:57.079 ++ readlink -f spdk_repo
00:02:57.079 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:57.079 + [[ -n /home/vagrant/spdk_repo ]]
00:02:57.079 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:57.079 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:57.079 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:57.079 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:57.079 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:57.079 + [[ iscsi-vg-autotest == pkgdep-* ]]
00:02:57.079 + cd /home/vagrant/spdk_repo
00:02:57.079 + source /etc/os-release
00:02:57.079 ++ NAME='Fedora Linux'
00:02:57.079 ++ VERSION='38 (Cloud Edition)'
00:02:57.079 ++ ID=fedora
00:02:57.079 ++ VERSION_ID=38
00:02:57.079 ++ VERSION_CODENAME=
00:02:57.079 ++ PLATFORM_ID=platform:f38
00:02:57.079 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:57.079 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:57.079 ++ LOGO=fedora-logo-icon
00:02:57.079 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:57.079 ++ HOME_URL=https://fedoraproject.org/
00:02:57.079 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:57.079 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:57.079 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:57.079 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:57.079 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:57.079 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:57.079 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:57.079 ++ SUPPORT_END=2024-05-14
00:02:57.079 ++ VARIANT='Cloud Edition'
00:02:57.079 ++ VARIANT_ID=cloud
00:02:57.079 + uname -a
00:02:57.079 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:57.079 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:57.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:57.647 Hugepages
00:02:57.647 node hugesize free / total
00:02:57.647 node0 1048576kB 0 / 0
00:02:57.647 node0 2048kB 0 / 0
00:02:57.647
00:02:57.647 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:57.647 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:57.647 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:57.647 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:57.647 + rm -f /tmp/spdk-ld-path
00:02:57.647 + source autorun-spdk.conf
00:02:57.647 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:57.647 ++ SPDK_TEST_ISCSI_INITIATOR=1
00:02:57.647 ++ SPDK_TEST_ISCSI=1
00:02:57.647 ++ SPDK_TEST_RBD=1
00:02:57.647 ++ SPDK_RUN_ASAN=1
00:02:57.647 ++ SPDK_RUN_UBSAN=1
00:02:57.647 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:57.647 ++ RUN_NIGHTLY=1
00:02:57.647 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:57.647 + [[ -n '' ]]
00:02:57.647 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:57.647 + for M in /var/spdk/build-*-manifest.txt
00:02:57.647 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:57.647 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:57.647 + for M in /var/spdk/build-*-manifest.txt
00:02:57.647 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:57.647 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:57.647 ++ uname
00:02:57.647 + [[ Linux == \L\i\n\u\x ]]
00:02:57.647 + sudo dmesg -T
00:02:57.647 + sudo dmesg --clear
00:02:57.905 + dmesg_pid=5329
00:02:57.905 + [[ Fedora Linux == FreeBSD ]]
00:02:57.905 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:57.905 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:57.905 + sudo dmesg -Tw
00:02:57.905 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:57.905 + [[ -x /usr/src/fio-static/fio ]]
00:02:57.905 + export FIO_BIN=/usr/src/fio-static/fio
00:02:57.905 + FIO_BIN=/usr/src/fio-static/fio
00:02:57.905 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:57.905 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:57.905 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:57.905 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:57.905 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:57.905 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:57.905 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:57.905 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:57.905 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:57.905 Test configuration:
00:02:57.905 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:57.905 SPDK_TEST_ISCSI_INITIATOR=1
00:02:57.905 SPDK_TEST_ISCSI=1
00:02:57.905 SPDK_TEST_RBD=1
00:02:57.905 SPDK_RUN_ASAN=1
00:02:57.905 SPDK_RUN_UBSAN=1
00:02:57.905 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:57.905 RUN_NIGHTLY=1
08:45:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:57.905 08:45:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:57.905 08:45:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:57.905 08:45:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:57.905 08:45:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.906 08:45:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.906 08:45:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.906 08:45:04 -- paths/export.sh@5 -- $ export PATH
00:02:57.906 08:45:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.906 08:45:04 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:57.906 08:45:04 -- common/autobuild_common.sh@447 -- $ date +%s
00:02:57.906 08:45:04 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721897104.XXXXXX
00:02:57.906 08:45:04 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721897104.vt0SCi
00:02:57.906 08:45:04 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:02:57.906 08:45:04 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:02:57.906 08:45:04 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:57.906 08:45:04 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:57.906 08:45:04 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:57.906 08:45:04 -- common/autobuild_common.sh@463 -- $ get_config_params
00:02:57.906 08:45:04 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:02:57.906 08:45:04 -- common/autotest_common.sh@10 -- $ set +x
00:02:57.906 08:45:04 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:02:57.906 08:45:04 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:02:57.906 08:45:04 -- pm/common@17 -- $ local monitor
00:02:57.906 08:45:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:57.906 08:45:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:57.906 08:45:04 -- pm/common@25 -- $ sleep 1
00:02:57.906 08:45:04 -- pm/common@21 -- $ date +%s
00:02:57.906 08:45:04 -- pm/common@21 -- $ date +%s
00:02:57.906 08:45:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721897104
00:02:57.906 08:45:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721897104
00:02:57.906 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721897104_collect-vmstat.pm.log
00:02:57.906 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721897104_collect-cpu-load.pm.log
00:02:58.842 08:45:05 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:02:58.842 08:45:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:58.842 08:45:05 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:58.842 08:45:05 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:58.842 08:45:05 -- spdk/autobuild.sh@16 -- $ date -u
00:02:58.842 Thu Jul 25 08:45:05 AM UTC 2024
00:02:58.842 08:45:05 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:59.100 v24.09-pre-321-g704257090
00:02:59.100 08:45:05 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:59.100 08:45:05 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:59.100 08:45:05 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:59.100 08:45:05 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:59.100 08:45:05 -- common/autotest_common.sh@10 -- $ set +x
00:02:59.100 ************************************
00:02:59.100 START TEST asan
00:02:59.100 ************************************
00:02:59.100 using asan
00:02:59.100 08:45:05 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:02:59.100
00:02:59.100 real	0m0.001s
00:02:59.100 user	0m0.000s
00:02:59.100 sys	0m0.000s
00:02:59.100 08:45:05 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:59.100 08:45:05 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:59.100 ************************************
00:02:59.100 END TEST asan
00:02:59.100 ************************************
00:02:59.100 08:45:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:59.100 08:45:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:59.100 08:45:06 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:59.100 08:45:06 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:59.100 08:45:06 -- common/autotest_common.sh@10 -- $ set +x
00:02:59.100 ************************************
00:02:59.100 START TEST ubsan
00:02:59.100 ************************************
00:02:59.100 using ubsan
00:02:59.100 08:45:06 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:59.100
00:02:59.100 real	0m0.000s
00:02:59.100 user	0m0.000s
00:02:59.100 sys	0m0.000s
00:02:59.100 08:45:06 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:59.100 08:45:06 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:59.100 ************************************
00:02:59.100 END TEST ubsan
00:02:59.100 ************************************
00:02:59.100 08:45:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:59.100 08:45:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:59.100 08:45:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:59.100 08:45:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:59.100 08:45:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:59.100 08:45:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:59.100 08:45:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:59.100 08:45:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:59.100 08:45:06 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:02:59.359 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:59.359 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:59.616 Using 'verbs' RDMA provider
00:03:15.891 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:30.844 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:30.844 Creating mk/config.mk...done.
00:03:30.844 Creating mk/cc.flags.mk...done.
00:03:30.844 Type 'make' to build.
00:03:30.844 08:45:37 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:03:30.844 08:45:37 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:30.844 08:45:37 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:30.844 08:45:37 -- common/autotest_common.sh@10 -- $ set +x
00:03:30.844 ************************************
00:03:30.844 START TEST make
00:03:30.844 ************************************
00:03:30.844 08:45:37 make -- common/autotest_common.sh@1125 -- $ make -j10
00:03:30.844 make[1]: Nothing to be done for 'all'.
00:03:43.047 The Meson build system
00:03:43.048 Version: 1.3.1
00:03:43.048 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:43.048 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:43.048 Build type: native build
00:03:43.048 Program cat found: YES (/usr/bin/cat)
00:03:43.048 Project name: DPDK
00:03:43.048 Project version: 24.03.0
00:03:43.048 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:43.048 C linker for the host machine: cc ld.bfd 2.39-16
00:03:43.048 Host machine cpu family: x86_64
00:03:43.048 Host machine cpu: x86_64
00:03:43.048 Message: ## Building in Developer Mode ##
00:03:43.048 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:43.048 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:43.048 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:43.048 Program python3 found: YES (/usr/bin/python3)
00:03:43.048 Program cat found: YES (/usr/bin/cat)
00:03:43.048 Compiler for C supports arguments -march=native: YES
00:03:43.048 Checking for size of "void *" : 8
00:03:43.048 Checking for size of "void *" : 8 (cached)
00:03:43.048 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:03:43.048 Library m found: YES
00:03:43.048 Library numa found: YES
00:03:43.048 Has header "numaif.h" : YES
00:03:43.048 Library fdt found: NO
00:03:43.048 Library execinfo found: NO
00:03:43.048 Has header "execinfo.h" : YES
00:03:43.048 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:43.048 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:43.048 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:43.048 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:43.048 Run-time dependency openssl found: YES 3.0.9
00:03:43.048 Run-time dependency libpcap found: YES 1.10.4
00:03:43.048 Has header "pcap.h" with dependency libpcap: YES
00:03:43.048 Compiler for C supports arguments -Wcast-qual: YES
00:03:43.048 Compiler for C supports arguments -Wdeprecated: YES
00:03:43.048 Compiler for C supports arguments -Wformat: YES
00:03:43.048 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:43.048 Compiler for C supports arguments -Wformat-security: NO
00:03:43.048 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:43.048 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:43.048 Compiler for C supports arguments -Wnested-externs: YES
00:03:43.048 Compiler for C supports arguments -Wold-style-definition: YES
00:03:43.048 Compiler for C supports arguments -Wpointer-arith: YES
00:03:43.048 Compiler for C supports arguments -Wsign-compare: YES
00:03:43.048 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:43.048 Compiler for C supports arguments -Wundef: YES
00:03:43.048 Compiler for C supports arguments -Wwrite-strings: YES
00:03:43.048 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:43.048 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:43.048 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:43.048 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:43.048 Program objdump found: YES (/usr/bin/objdump)
00:03:43.048 Compiler for C supports arguments -mavx512f: YES
00:03:43.048 Checking if "AVX512
checking" compiles: YES 00:03:43.048 Fetching value of define "__SSE4_2__" : 1 00:03:43.048 Fetching value of define "__AES__" : 1 00:03:43.048 Fetching value of define "__AVX__" : 1 00:03:43.048 Fetching value of define "__AVX2__" : 1 00:03:43.048 Fetching value of define "__AVX512BW__" : 1 00:03:43.048 Fetching value of define "__AVX512CD__" : 1 00:03:43.048 Fetching value of define "__AVX512DQ__" : 1 00:03:43.048 Fetching value of define "__AVX512F__" : 1 00:03:43.048 Fetching value of define "__AVX512VL__" : 1 00:03:43.048 Fetching value of define "__PCLMUL__" : 1 00:03:43.048 Fetching value of define "__RDRND__" : 1 00:03:43.048 Fetching value of define "__RDSEED__" : 1 00:03:43.048 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:43.048 Fetching value of define "__znver1__" : (undefined) 00:03:43.048 Fetching value of define "__znver2__" : (undefined) 00:03:43.048 Fetching value of define "__znver3__" : (undefined) 00:03:43.048 Fetching value of define "__znver4__" : (undefined) 00:03:43.048 Library asan found: YES 00:03:43.048 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:43.048 Message: lib/log: Defining dependency "log" 00:03:43.048 Message: lib/kvargs: Defining dependency "kvargs" 00:03:43.048 Message: lib/telemetry: Defining dependency "telemetry" 00:03:43.048 Library rt found: YES 00:03:43.048 Checking for function "getentropy" : NO 00:03:43.048 Message: lib/eal: Defining dependency "eal" 00:03:43.048 Message: lib/ring: Defining dependency "ring" 00:03:43.048 Message: lib/rcu: Defining dependency "rcu" 00:03:43.048 Message: lib/mempool: Defining dependency "mempool" 00:03:43.048 Message: lib/mbuf: Defining dependency "mbuf" 00:03:43.048 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:43.048 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:43.048 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:43.048 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:43.048 Fetching value of define 
"__AVX512VL__" : 1 (cached) 00:03:43.048 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:43.048 Compiler for C supports arguments -mpclmul: YES 00:03:43.048 Compiler for C supports arguments -maes: YES 00:03:43.048 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:43.048 Compiler for C supports arguments -mavx512bw: YES 00:03:43.048 Compiler for C supports arguments -mavx512dq: YES 00:03:43.048 Compiler for C supports arguments -mavx512vl: YES 00:03:43.048 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:43.048 Compiler for C supports arguments -mavx2: YES 00:03:43.048 Compiler for C supports arguments -mavx: YES 00:03:43.048 Message: lib/net: Defining dependency "net" 00:03:43.048 Message: lib/meter: Defining dependency "meter" 00:03:43.048 Message: lib/ethdev: Defining dependency "ethdev" 00:03:43.048 Message: lib/pci: Defining dependency "pci" 00:03:43.048 Message: lib/cmdline: Defining dependency "cmdline" 00:03:43.048 Message: lib/hash: Defining dependency "hash" 00:03:43.048 Message: lib/timer: Defining dependency "timer" 00:03:43.048 Message: lib/compressdev: Defining dependency "compressdev" 00:03:43.048 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:43.048 Message: lib/dmadev: Defining dependency "dmadev" 00:03:43.048 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:43.048 Message: lib/power: Defining dependency "power" 00:03:43.048 Message: lib/reorder: Defining dependency "reorder" 00:03:43.048 Message: lib/security: Defining dependency "security" 00:03:43.048 Has header "linux/userfaultfd.h" : YES 00:03:43.048 Has header "linux/vduse.h" : YES 00:03:43.048 Message: lib/vhost: Defining dependency "vhost" 00:03:43.048 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:43.048 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:43.048 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:43.048 Message: drivers/mempool/ring: Defining 
dependency "mempool_ring" 00:03:43.048 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:43.048 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:43.048 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:43.048 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:43.048 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:43.048 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:43.048 Program doxygen found: YES (/usr/bin/doxygen) 00:03:43.048 Configuring doxy-api-html.conf using configuration 00:03:43.048 Configuring doxy-api-man.conf using configuration 00:03:43.048 Program mandb found: YES (/usr/bin/mandb) 00:03:43.048 Program sphinx-build found: NO 00:03:43.048 Configuring rte_build_config.h using configuration 00:03:43.048 Message: 00:03:43.048 ================= 00:03:43.048 Applications Enabled 00:03:43.048 ================= 00:03:43.048 00:03:43.048 apps: 00:03:43.048 00:03:43.048 00:03:43.048 Message: 00:03:43.048 ================= 00:03:43.048 Libraries Enabled 00:03:43.048 ================= 00:03:43.048 00:03:43.048 libs: 00:03:43.048 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:43.048 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:43.048 cryptodev, dmadev, power, reorder, security, vhost, 00:03:43.048 00:03:43.048 Message: 00:03:43.049 =============== 00:03:43.049 Drivers Enabled 00:03:43.049 =============== 00:03:43.049 00:03:43.049 common: 00:03:43.049 00:03:43.049 bus: 00:03:43.049 pci, vdev, 00:03:43.049 mempool: 00:03:43.049 ring, 00:03:43.049 dma: 00:03:43.049 00:03:43.049 net: 00:03:43.049 00:03:43.049 crypto: 00:03:43.049 00:03:43.049 compress: 00:03:43.049 00:03:43.049 vdpa: 00:03:43.049 00:03:43.049 00:03:43.049 Message: 00:03:43.049 ================= 00:03:43.049 Content Skipped 00:03:43.049 ================= 00:03:43.049 00:03:43.049 apps: 
00:03:43.049 dumpcap: explicitly disabled via build config 00:03:43.049 graph: explicitly disabled via build config 00:03:43.049 pdump: explicitly disabled via build config 00:03:43.049 proc-info: explicitly disabled via build config 00:03:43.049 test-acl: explicitly disabled via build config 00:03:43.049 test-bbdev: explicitly disabled via build config 00:03:43.049 test-cmdline: explicitly disabled via build config 00:03:43.049 test-compress-perf: explicitly disabled via build config 00:03:43.049 test-crypto-perf: explicitly disabled via build config 00:03:43.049 test-dma-perf: explicitly disabled via build config 00:03:43.049 test-eventdev: explicitly disabled via build config 00:03:43.049 test-fib: explicitly disabled via build config 00:03:43.049 test-flow-perf: explicitly disabled via build config 00:03:43.049 test-gpudev: explicitly disabled via build config 00:03:43.049 test-mldev: explicitly disabled via build config 00:03:43.049 test-pipeline: explicitly disabled via build config 00:03:43.049 test-pmd: explicitly disabled via build config 00:03:43.049 test-regex: explicitly disabled via build config 00:03:43.049 test-sad: explicitly disabled via build config 00:03:43.049 test-security-perf: explicitly disabled via build config 00:03:43.049 00:03:43.049 libs: 00:03:43.049 argparse: explicitly disabled via build config 00:03:43.049 metrics: explicitly disabled via build config 00:03:43.049 acl: explicitly disabled via build config 00:03:43.049 bbdev: explicitly disabled via build config 00:03:43.049 bitratestats: explicitly disabled via build config 00:03:43.049 bpf: explicitly disabled via build config 00:03:43.049 cfgfile: explicitly disabled via build config 00:03:43.049 distributor: explicitly disabled via build config 00:03:43.049 efd: explicitly disabled via build config 00:03:43.049 eventdev: explicitly disabled via build config 00:03:43.049 dispatcher: explicitly disabled via build config 00:03:43.049 gpudev: explicitly disabled via build config 
00:03:43.049 gro: explicitly disabled via build config 00:03:43.049 gso: explicitly disabled via build config 00:03:43.049 ip_frag: explicitly disabled via build config 00:03:43.049 jobstats: explicitly disabled via build config 00:03:43.049 latencystats: explicitly disabled via build config 00:03:43.049 lpm: explicitly disabled via build config 00:03:43.049 member: explicitly disabled via build config 00:03:43.049 pcapng: explicitly disabled via build config 00:03:43.049 rawdev: explicitly disabled via build config 00:03:43.049 regexdev: explicitly disabled via build config 00:03:43.049 mldev: explicitly disabled via build config 00:03:43.049 rib: explicitly disabled via build config 00:03:43.049 sched: explicitly disabled via build config 00:03:43.049 stack: explicitly disabled via build config 00:03:43.049 ipsec: explicitly disabled via build config 00:03:43.049 pdcp: explicitly disabled via build config 00:03:43.049 fib: explicitly disabled via build config 00:03:43.049 port: explicitly disabled via build config 00:03:43.049 pdump: explicitly disabled via build config 00:03:43.049 table: explicitly disabled via build config 00:03:43.049 pipeline: explicitly disabled via build config 00:03:43.049 graph: explicitly disabled via build config 00:03:43.049 node: explicitly disabled via build config 00:03:43.049 00:03:43.049 drivers: 00:03:43.049 common/cpt: not in enabled drivers build config 00:03:43.049 common/dpaax: not in enabled drivers build config 00:03:43.049 common/iavf: not in enabled drivers build config 00:03:43.049 common/idpf: not in enabled drivers build config 00:03:43.049 common/ionic: not in enabled drivers build config 00:03:43.049 common/mvep: not in enabled drivers build config 00:03:43.049 common/octeontx: not in enabled drivers build config 00:03:43.049 bus/auxiliary: not in enabled drivers build config 00:03:43.049 bus/cdx: not in enabled drivers build config 00:03:43.049 bus/dpaa: not in enabled drivers build config 00:03:43.049 bus/fslmc: 
not in enabled drivers build config 00:03:43.049 bus/ifpga: not in enabled drivers build config 00:03:43.049 bus/platform: not in enabled drivers build config 00:03:43.049 bus/uacce: not in enabled drivers build config 00:03:43.049 bus/vmbus: not in enabled drivers build config 00:03:43.049 common/cnxk: not in enabled drivers build config 00:03:43.049 common/mlx5: not in enabled drivers build config 00:03:43.049 common/nfp: not in enabled drivers build config 00:03:43.049 common/nitrox: not in enabled drivers build config 00:03:43.049 common/qat: not in enabled drivers build config 00:03:43.049 common/sfc_efx: not in enabled drivers build config 00:03:43.049 mempool/bucket: not in enabled drivers build config 00:03:43.049 mempool/cnxk: not in enabled drivers build config 00:03:43.049 mempool/dpaa: not in enabled drivers build config 00:03:43.049 mempool/dpaa2: not in enabled drivers build config 00:03:43.049 mempool/octeontx: not in enabled drivers build config 00:03:43.049 mempool/stack: not in enabled drivers build config 00:03:43.049 dma/cnxk: not in enabled drivers build config 00:03:43.049 dma/dpaa: not in enabled drivers build config 00:03:43.049 dma/dpaa2: not in enabled drivers build config 00:03:43.049 dma/hisilicon: not in enabled drivers build config 00:03:43.049 dma/idxd: not in enabled drivers build config 00:03:43.049 dma/ioat: not in enabled drivers build config 00:03:43.049 dma/skeleton: not in enabled drivers build config 00:03:43.049 net/af_packet: not in enabled drivers build config 00:03:43.049 net/af_xdp: not in enabled drivers build config 00:03:43.049 net/ark: not in enabled drivers build config 00:03:43.049 net/atlantic: not in enabled drivers build config 00:03:43.049 net/avp: not in enabled drivers build config 00:03:43.049 net/axgbe: not in enabled drivers build config 00:03:43.049 net/bnx2x: not in enabled drivers build config 00:03:43.049 net/bnxt: not in enabled drivers build config 00:03:43.049 net/bonding: not in enabled drivers 
build config 00:03:43.049 net/cnxk: not in enabled drivers build config 00:03:43.049 net/cpfl: not in enabled drivers build config 00:03:43.049 net/cxgbe: not in enabled drivers build config 00:03:43.049 net/dpaa: not in enabled drivers build config 00:03:43.049 net/dpaa2: not in enabled drivers build config 00:03:43.049 net/e1000: not in enabled drivers build config 00:03:43.049 net/ena: not in enabled drivers build config 00:03:43.049 net/enetc: not in enabled drivers build config 00:03:43.049 net/enetfec: not in enabled drivers build config 00:03:43.049 net/enic: not in enabled drivers build config 00:03:43.049 net/failsafe: not in enabled drivers build config 00:03:43.049 net/fm10k: not in enabled drivers build config 00:03:43.049 net/gve: not in enabled drivers build config 00:03:43.049 net/hinic: not in enabled drivers build config 00:03:43.049 net/hns3: not in enabled drivers build config 00:03:43.049 net/i40e: not in enabled drivers build config 00:03:43.049 net/iavf: not in enabled drivers build config 00:03:43.049 net/ice: not in enabled drivers build config 00:03:43.049 net/idpf: not in enabled drivers build config 00:03:43.049 net/igc: not in enabled drivers build config 00:03:43.049 net/ionic: not in enabled drivers build config 00:03:43.049 net/ipn3ke: not in enabled drivers build config 00:03:43.049 net/ixgbe: not in enabled drivers build config 00:03:43.049 net/mana: not in enabled drivers build config 00:03:43.049 net/memif: not in enabled drivers build config 00:03:43.049 net/mlx4: not in enabled drivers build config 00:03:43.049 net/mlx5: not in enabled drivers build config 00:03:43.049 net/mvneta: not in enabled drivers build config 00:03:43.049 net/mvpp2: not in enabled drivers build config 00:03:43.049 net/netvsc: not in enabled drivers build config 00:03:43.049 net/nfb: not in enabled drivers build config 00:03:43.049 net/nfp: not in enabled drivers build config 00:03:43.049 net/ngbe: not in enabled drivers build config 00:03:43.049 net/null: 
not in enabled drivers build config 00:03:43.049 net/octeontx: not in enabled drivers build config 00:03:43.049 net/octeon_ep: not in enabled drivers build config 00:03:43.049 net/pcap: not in enabled drivers build config 00:03:43.049 net/pfe: not in enabled drivers build config 00:03:43.049 net/qede: not in enabled drivers build config 00:03:43.049 net/ring: not in enabled drivers build config 00:03:43.049 net/sfc: not in enabled drivers build config 00:03:43.049 net/softnic: not in enabled drivers build config 00:03:43.049 net/tap: not in enabled drivers build config 00:03:43.049 net/thunderx: not in enabled drivers build config 00:03:43.049 net/txgbe: not in enabled drivers build config 00:03:43.049 net/vdev_netvsc: not in enabled drivers build config 00:03:43.049 net/vhost: not in enabled drivers build config 00:03:43.049 net/virtio: not in enabled drivers build config 00:03:43.049 net/vmxnet3: not in enabled drivers build config 00:03:43.049 raw/*: missing internal dependency, "rawdev" 00:03:43.050 crypto/armv8: not in enabled drivers build config 00:03:43.050 crypto/bcmfs: not in enabled drivers build config 00:03:43.050 crypto/caam_jr: not in enabled drivers build config 00:03:43.050 crypto/ccp: not in enabled drivers build config 00:03:43.050 crypto/cnxk: not in enabled drivers build config 00:03:43.050 crypto/dpaa_sec: not in enabled drivers build config 00:03:43.050 crypto/dpaa2_sec: not in enabled drivers build config 00:03:43.050 crypto/ipsec_mb: not in enabled drivers build config 00:03:43.050 crypto/mlx5: not in enabled drivers build config 00:03:43.050 crypto/mvsam: not in enabled drivers build config 00:03:43.050 crypto/nitrox: not in enabled drivers build config 00:03:43.050 crypto/null: not in enabled drivers build config 00:03:43.050 crypto/octeontx: not in enabled drivers build config 00:03:43.050 crypto/openssl: not in enabled drivers build config 00:03:43.050 crypto/scheduler: not in enabled drivers build config 00:03:43.050 crypto/uadk: not 
in enabled drivers build config 00:03:43.050 crypto/virtio: not in enabled drivers build config 00:03:43.050 compress/isal: not in enabled drivers build config 00:03:43.050 compress/mlx5: not in enabled drivers build config 00:03:43.050 compress/nitrox: not in enabled drivers build config 00:03:43.050 compress/octeontx: not in enabled drivers build config 00:03:43.050 compress/zlib: not in enabled drivers build config 00:03:43.050 regex/*: missing internal dependency, "regexdev" 00:03:43.050 ml/*: missing internal dependency, "mldev" 00:03:43.050 vdpa/ifc: not in enabled drivers build config 00:03:43.050 vdpa/mlx5: not in enabled drivers build config 00:03:43.050 vdpa/nfp: not in enabled drivers build config 00:03:43.050 vdpa/sfc: not in enabled drivers build config 00:03:43.050 event/*: missing internal dependency, "eventdev" 00:03:43.050 baseband/*: missing internal dependency, "bbdev" 00:03:43.050 gpu/*: missing internal dependency, "gpudev" 00:03:43.050 00:03:43.050 00:03:43.050 Build targets in project: 85 00:03:43.050 00:03:43.050 DPDK 24.03.0 00:03:43.050 00:03:43.050 User defined options 00:03:43.050 buildtype : debug 00:03:43.050 default_library : shared 00:03:43.050 libdir : lib 00:03:43.050 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:43.050 b_sanitize : address 00:03:43.050 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:43.050 c_link_args : 00:03:43.050 cpu_instruction_set: native 00:03:43.050 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:43.050 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:43.050 enable_docs : false 00:03:43.050 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:43.050 enable_kmods : false 00:03:43.050 max_lcores : 128 00:03:43.050 tests : false 00:03:43.050 00:03:43.050 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:43.050 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:43.050 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:43.050 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:43.050 [3/268] Linking static target lib/librte_kvargs.a 00:03:43.050 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:43.050 [5/268] Linking static target lib/librte_log.a 00:03:43.050 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:43.050 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:43.050 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:43.050 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.050 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:43.323 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:43.323 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:43.323 [13/268] Linking static target lib/librte_telemetry.a 00:03:43.323 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:43.323 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:43.323 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:43.583 [17/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:43.583 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:43.843 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:43.843 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.843 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:43.843 [22/268] Linking target lib/librte_log.so.24.1 00:03:43.843 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:43.843 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:43.843 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:43.843 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:43.843 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:44.101 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:44.101 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:44.101 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.101 [31/268] Linking target lib/librte_kvargs.so.24.1 00:03:44.359 [32/268] Linking target lib/librte_telemetry.so.24.1 00:03:44.359 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:44.359 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:44.616 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:44.616 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:44.616 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:44.616 [38/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:44.616 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:44.616 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:44.616 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:44.874 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:44.874 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:44.874 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:44.874 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:45.132 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:45.132 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:45.132 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:45.389 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:45.389 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:45.389 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:45.648 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:45.648 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:45.648 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:45.648 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:45.648 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:45.907 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:45.907 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:45.907 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:45.907 [60/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:45.907 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:45.907 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:46.166 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:46.166 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:46.166 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:46.166 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:46.166 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:46.751 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:46.751 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:46.751 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:46.751 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:46.751 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:46.751 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:46.751 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:46.751 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:46.751 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:46.751 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:47.009 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:47.009 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:47.009 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:47.009 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:47.268 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:47.268 [83/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:47.531 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:47.531 [85/268] Linking static target lib/librte_ring.a 00:03:47.531 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:47.531 [87/268] Linking static target lib/librte_eal.a 00:03:47.795 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:47.795 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:47.795 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:47.795 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:47.795 [92/268] Linking static target lib/librte_mempool.a 00:03:47.795 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:47.795 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:47.795 [95/268] Linking static target lib/librte_rcu.a 00:03:47.795 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:48.052 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.310 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:48.311 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:48.311 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:48.311 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.311 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:48.311 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:48.311 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:48.311 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:48.569 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:48.569 [107/268] Linking static 
target lib/librte_net.a 00:03:48.569 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:48.569 [109/268] Linking static target lib/librte_meter.a 00:03:48.827 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:48.827 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:48.827 [112/268] Linking static target lib/librte_mbuf.a 00:03:48.827 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:48.827 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.827 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:49.086 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.086 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:49.086 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.344 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:49.603 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:49.603 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:49.603 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:49.861 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:49.861 [124/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.861 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:49.861 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:50.119 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:50.119 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:50.119 [129/268] Linking static target lib/librte_pci.a 00:03:50.119 [130/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:50.119 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:50.119 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:50.119 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:50.379 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:50.379 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:50.379 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:50.379 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:50.379 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:50.379 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:50.379 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:50.379 [141/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.379 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:50.379 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:50.379 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:50.379 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:50.637 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:50.637 [147/268] Linking static target lib/librte_cmdline.a 00:03:50.896 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:50.896 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:50.896 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:50.896 [151/268] Linking static target lib/librte_timer.a 00:03:50.896 [152/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:51.156 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:51.415 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:51.415 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:51.415 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:51.415 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.415 [158/268] Linking static target lib/librte_ethdev.a 00:03:51.415 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:51.415 [160/268] Linking static target lib/librte_compressdev.a 00:03:51.674 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:51.674 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:51.674 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:51.674 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:51.674 [165/268] Linking static target lib/librte_hash.a 00:03:51.934 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:51.934 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:51.934 [168/268] Linking static target lib/librte_dmadev.a 00:03:52.194 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:52.194 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.194 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:52.194 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:52.194 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:52.453 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson 
to capture output) 00:03:52.453 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:52.711 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:52.711 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:52.711 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:52.711 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:52.711 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.711 [181/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.968 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:52.968 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:53.226 [184/268] Linking static target lib/librte_power.a 00:03:53.226 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:53.226 [186/268] Linking static target lib/librte_cryptodev.a 00:03:53.226 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:53.226 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:53.226 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:53.226 [190/268] Linking static target lib/librte_reorder.a 00:03:53.484 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:53.484 [192/268] Linking static target lib/librte_security.a 00:03:53.484 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:53.741 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.999 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:53.999 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.999 [197/268] Generating lib/security.sym_chk 
with a custom command (wrapped by meson to capture output) 00:03:54.257 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:54.257 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:54.257 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:54.515 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:54.515 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:54.515 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:54.774 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:54.774 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:54.774 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:55.032 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:55.032 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:55.032 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:55.032 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:55.032 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.032 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:55.032 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:55.032 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:55.291 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:55.291 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:55.291 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:55.291 [218/268] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:55.291 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:55.291 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:55.291 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:55.291 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.550 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:55.550 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:55.550 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:55.550 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:55.550 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.488 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:57.424 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.683 [230/268] Linking target lib/librte_eal.so.24.1 00:03:57.683 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:57.942 [232/268] Linking target lib/librte_pci.so.24.1 00:03:57.942 [233/268] Linking target lib/librte_ring.so.24.1 00:03:57.942 [234/268] Linking target lib/librte_timer.so.24.1 00:03:57.942 [235/268] Linking target lib/librte_dmadev.so.24.1 00:03:57.942 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:57.942 [237/268] Linking target lib/librte_meter.so.24.1 00:03:57.942 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:57.942 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:57.942 [240/268] Generating symbol file 
lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:57.942 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:57.942 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:57.942 [243/268] Linking target lib/librte_rcu.so.24.1 00:03:57.942 [244/268] Linking target lib/librte_mempool.so.24.1 00:03:57.942 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:58.201 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:58.201 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:58.201 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:58.201 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:58.460 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:58.460 [251/268] Linking target lib/librte_net.so.24.1 00:03:58.460 [252/268] Linking target lib/librte_compressdev.so.24.1 00:03:58.460 [253/268] Linking target lib/librte_reorder.so.24.1 00:03:58.460 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:58.719 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:58.719 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:58.719 [257/268] Linking target lib/librte_hash.so.24.1 00:03:58.719 [258/268] Linking target lib/librte_security.so.24.1 00:03:58.719 [259/268] Linking target lib/librte_cmdline.so.24.1 00:03:58.719 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:00.127 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.127 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:00.127 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:00.386 [264/268] Linking target 
lib/librte_power.so.24.1 00:04:00.386 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:00.386 [266/268] Linking static target lib/librte_vhost.a 00:04:02.922 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.922 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:02.922 INFO: autodetecting backend as ninja 00:04:02.922 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:04.301 CC lib/ut/ut.o 00:04:04.301 CC lib/ut_mock/mock.o 00:04:04.301 CC lib/log/log.o 00:04:04.301 CC lib/log/log_deprecated.o 00:04:04.301 CC lib/log/log_flags.o 00:04:04.301 LIB libspdk_ut.a 00:04:04.301 LIB libspdk_ut_mock.a 00:04:04.301 LIB libspdk_log.a 00:04:04.301 SO libspdk_ut.so.2.0 00:04:04.301 SO libspdk_ut_mock.so.6.0 00:04:04.301 SO libspdk_log.so.7.0 00:04:04.301 SYMLINK libspdk_ut.so 00:04:04.560 SYMLINK libspdk_ut_mock.so 00:04:04.560 SYMLINK libspdk_log.so 00:04:04.560 CC lib/util/base64.o 00:04:04.560 CC lib/util/bit_array.o 00:04:04.560 CC lib/util/cpuset.o 00:04:04.560 CC lib/util/crc16.o 00:04:04.560 CC lib/util/crc32c.o 00:04:04.560 CC lib/util/crc32.o 00:04:04.560 CXX lib/trace_parser/trace.o 00:04:04.560 CC lib/dma/dma.o 00:04:04.560 CC lib/ioat/ioat.o 00:04:04.819 CC lib/vfio_user/host/vfio_user_pci.o 00:04:04.819 CC lib/util/crc32_ieee.o 00:04:04.819 CC lib/util/crc64.o 00:04:04.819 CC lib/vfio_user/host/vfio_user.o 00:04:04.819 CC lib/util/dif.o 00:04:04.819 LIB libspdk_dma.a 00:04:04.819 CC lib/util/fd.o 00:04:04.819 CC lib/util/fd_group.o 00:04:04.819 SO libspdk_dma.so.4.0 00:04:04.819 CC lib/util/file.o 00:04:04.819 CC lib/util/hexlify.o 00:04:05.078 SYMLINK libspdk_dma.so 00:04:05.078 CC lib/util/iov.o 00:04:05.078 LIB libspdk_ioat.a 00:04:05.078 CC lib/util/math.o 00:04:05.078 SO libspdk_ioat.so.7.0 00:04:05.078 CC lib/util/net.o 00:04:05.078 CC lib/util/pipe.o 00:04:05.078 LIB libspdk_vfio_user.a 00:04:05.078 
SYMLINK libspdk_ioat.so 00:04:05.078 CC lib/util/strerror_tls.o 00:04:05.078 CC lib/util/string.o 00:04:05.078 SO libspdk_vfio_user.so.5.0 00:04:05.078 CC lib/util/uuid.o 00:04:05.078 CC lib/util/xor.o 00:04:05.078 CC lib/util/zipf.o 00:04:05.078 SYMLINK libspdk_vfio_user.so 00:04:05.337 LIB libspdk_util.a 00:04:05.595 SO libspdk_util.so.10.0 00:04:05.595 LIB libspdk_trace_parser.a 00:04:05.595 SO libspdk_trace_parser.so.5.0 00:04:05.853 SYMLINK libspdk_util.so 00:04:05.853 SYMLINK libspdk_trace_parser.so 00:04:05.853 CC lib/env_dpdk/env.o 00:04:05.853 CC lib/rdma_provider/common.o 00:04:05.853 CC lib/env_dpdk/memory.o 00:04:05.853 CC lib/json/json_parse.o 00:04:05.853 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:05.853 CC lib/env_dpdk/pci.o 00:04:05.853 CC lib/conf/conf.o 00:04:05.853 CC lib/vmd/vmd.o 00:04:05.853 CC lib/rdma_utils/rdma_utils.o 00:04:05.853 CC lib/idxd/idxd.o 00:04:06.111 CC lib/env_dpdk/init.o 00:04:06.111 LIB libspdk_rdma_provider.a 00:04:06.111 SO libspdk_rdma_provider.so.6.0 00:04:06.111 LIB libspdk_conf.a 00:04:06.111 CC lib/json/json_util.o 00:04:06.111 SO libspdk_conf.so.6.0 00:04:06.111 LIB libspdk_rdma_utils.a 00:04:06.111 SYMLINK libspdk_rdma_provider.so 00:04:06.111 CC lib/json/json_write.o 00:04:06.111 SO libspdk_rdma_utils.so.1.0 00:04:06.111 SYMLINK libspdk_conf.so 00:04:06.370 CC lib/env_dpdk/threads.o 00:04:06.370 SYMLINK libspdk_rdma_utils.so 00:04:06.370 CC lib/env_dpdk/pci_ioat.o 00:04:06.370 CC lib/env_dpdk/pci_virtio.o 00:04:06.370 CC lib/env_dpdk/pci_vmd.o 00:04:06.370 CC lib/env_dpdk/pci_idxd.o 00:04:06.370 CC lib/env_dpdk/pci_event.o 00:04:06.370 CC lib/env_dpdk/sigbus_handler.o 00:04:06.370 CC lib/env_dpdk/pci_dpdk.o 00:04:06.627 LIB libspdk_json.a 00:04:06.627 CC lib/vmd/led.o 00:04:06.627 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:06.627 SO libspdk_json.so.6.0 00:04:06.627 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:06.627 CC lib/idxd/idxd_user.o 00:04:06.627 CC lib/idxd/idxd_kernel.o 00:04:06.627 SYMLINK libspdk_json.so 
00:04:06.627 LIB libspdk_vmd.a 00:04:06.627 SO libspdk_vmd.so.6.0 00:04:06.886 SYMLINK libspdk_vmd.so 00:04:06.886 CC lib/jsonrpc/jsonrpc_server.o 00:04:06.886 CC lib/jsonrpc/jsonrpc_client.o 00:04:06.886 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:06.886 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:06.886 LIB libspdk_idxd.a 00:04:06.886 SO libspdk_idxd.so.12.0 00:04:07.145 SYMLINK libspdk_idxd.so 00:04:07.145 LIB libspdk_jsonrpc.a 00:04:07.145 SO libspdk_jsonrpc.so.6.0 00:04:07.408 SYMLINK libspdk_jsonrpc.so 00:04:07.674 LIB libspdk_env_dpdk.a 00:04:07.674 SO libspdk_env_dpdk.so.15.0 00:04:07.674 CC lib/rpc/rpc.o 00:04:07.933 SYMLINK libspdk_env_dpdk.so 00:04:07.934 LIB libspdk_rpc.a 00:04:07.934 SO libspdk_rpc.so.6.0 00:04:07.934 SYMLINK libspdk_rpc.so 00:04:08.502 CC lib/trace/trace.o 00:04:08.502 CC lib/trace/trace_flags.o 00:04:08.502 CC lib/trace/trace_rpc.o 00:04:08.502 CC lib/notify/notify.o 00:04:08.502 CC lib/notify/notify_rpc.o 00:04:08.502 CC lib/keyring/keyring_rpc.o 00:04:08.502 CC lib/keyring/keyring.o 00:04:08.502 LIB libspdk_notify.a 00:04:08.502 SO libspdk_notify.so.6.0 00:04:08.761 SYMLINK libspdk_notify.so 00:04:08.761 LIB libspdk_keyring.a 00:04:08.761 LIB libspdk_trace.a 00:04:08.761 SO libspdk_keyring.so.1.0 00:04:08.761 SO libspdk_trace.so.10.0 00:04:08.761 SYMLINK libspdk_keyring.so 00:04:08.761 SYMLINK libspdk_trace.so 00:04:09.327 CC lib/sock/sock_rpc.o 00:04:09.327 CC lib/sock/sock.o 00:04:09.327 CC lib/thread/thread.o 00:04:09.327 CC lib/thread/iobuf.o 00:04:09.922 LIB libspdk_sock.a 00:04:09.923 SO libspdk_sock.so.10.0 00:04:09.923 SYMLINK libspdk_sock.so 00:04:10.218 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:10.218 CC lib/nvme/nvme_ctrlr.o 00:04:10.218 CC lib/nvme/nvme_fabric.o 00:04:10.218 CC lib/nvme/nvme_ns_cmd.o 00:04:10.218 CC lib/nvme/nvme_pcie.o 00:04:10.218 CC lib/nvme/nvme_pcie_common.o 00:04:10.218 CC lib/nvme/nvme_ns.o 00:04:10.218 CC lib/nvme/nvme_qpair.o 00:04:10.218 CC lib/nvme/nvme.o 00:04:11.153 CC lib/nvme/nvme_quirks.o 
00:04:11.153 CC lib/nvme/nvme_transport.o 00:04:11.153 LIB libspdk_thread.a 00:04:11.153 CC lib/nvme/nvme_discovery.o 00:04:11.153 SO libspdk_thread.so.10.1 00:04:11.153 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:11.153 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:11.153 SYMLINK libspdk_thread.so 00:04:11.153 CC lib/nvme/nvme_tcp.o 00:04:11.153 CC lib/nvme/nvme_opal.o 00:04:11.411 CC lib/nvme/nvme_io_msg.o 00:04:11.669 CC lib/nvme/nvme_poll_group.o 00:04:11.669 CC lib/nvme/nvme_zns.o 00:04:11.927 CC lib/nvme/nvme_stubs.o 00:04:11.927 CC lib/blob/blobstore.o 00:04:11.927 CC lib/accel/accel.o 00:04:11.927 CC lib/init/json_config.o 00:04:11.927 CC lib/virtio/virtio.o 00:04:12.185 CC lib/virtio/virtio_vhost_user.o 00:04:12.185 CC lib/virtio/virtio_vfio_user.o 00:04:12.185 CC lib/init/subsystem.o 00:04:12.449 CC lib/init/subsystem_rpc.o 00:04:12.449 CC lib/accel/accel_rpc.o 00:04:12.449 CC lib/blob/request.o 00:04:12.449 CC lib/accel/accel_sw.o 00:04:12.449 CC lib/blob/zeroes.o 00:04:12.449 CC lib/virtio/virtio_pci.o 00:04:12.449 CC lib/init/rpc.o 00:04:12.705 CC lib/blob/blob_bs_dev.o 00:04:12.705 LIB libspdk_init.a 00:04:12.705 CC lib/nvme/nvme_auth.o 00:04:12.705 SO libspdk_init.so.5.0 00:04:12.705 CC lib/nvme/nvme_cuse.o 00:04:12.705 CC lib/nvme/nvme_rdma.o 00:04:12.705 SYMLINK libspdk_init.so 00:04:12.963 LIB libspdk_virtio.a 00:04:12.963 SO libspdk_virtio.so.7.0 00:04:12.963 SYMLINK libspdk_virtio.so 00:04:12.963 CC lib/event/app.o 00:04:12.963 CC lib/event/reactor.o 00:04:12.963 CC lib/event/log_rpc.o 00:04:12.963 CC lib/event/app_rpc.o 00:04:13.221 CC lib/event/scheduler_static.o 00:04:13.221 LIB libspdk_accel.a 00:04:13.221 SO libspdk_accel.so.16.0 00:04:13.221 SYMLINK libspdk_accel.so 00:04:13.479 CC lib/bdev/bdev.o 00:04:13.479 CC lib/bdev/bdev_rpc.o 00:04:13.479 CC lib/bdev/bdev_zone.o 00:04:13.479 CC lib/bdev/part.o 00:04:13.737 CC lib/bdev/scsi_nvme.o 00:04:13.737 LIB libspdk_event.a 00:04:13.737 SO libspdk_event.so.14.0 00:04:13.737 SYMLINK libspdk_event.so 
00:04:14.671 LIB libspdk_nvme.a 00:04:14.671 SO libspdk_nvme.so.13.1 00:04:15.239 SYMLINK libspdk_nvme.so 00:04:16.174 LIB libspdk_blob.a 00:04:16.174 SO libspdk_blob.so.11.0 00:04:16.174 SYMLINK libspdk_blob.so 00:04:16.439 CC lib/blobfs/tree.o 00:04:16.439 CC lib/blobfs/blobfs.o 00:04:16.439 CC lib/lvol/lvol.o 00:04:17.012 LIB libspdk_bdev.a 00:04:17.270 SO libspdk_bdev.so.16.0 00:04:17.270 SYMLINK libspdk_bdev.so 00:04:17.528 CC lib/nvmf/ctrlr_discovery.o 00:04:17.528 CC lib/nvmf/ctrlr.o 00:04:17.528 CC lib/nvmf/subsystem.o 00:04:17.528 CC lib/nvmf/ctrlr_bdev.o 00:04:17.528 CC lib/nbd/nbd.o 00:04:17.528 CC lib/ublk/ublk.o 00:04:17.528 CC lib/scsi/dev.o 00:04:17.528 CC lib/ftl/ftl_core.o 00:04:17.528 LIB libspdk_blobfs.a 00:04:17.528 SO libspdk_blobfs.so.10.0 00:04:17.786 LIB libspdk_lvol.a 00:04:17.786 SYMLINK libspdk_blobfs.so 00:04:17.786 CC lib/ftl/ftl_init.o 00:04:17.786 SO libspdk_lvol.so.10.0 00:04:17.786 CC lib/scsi/lun.o 00:04:17.786 SYMLINK libspdk_lvol.so 00:04:17.786 CC lib/scsi/port.o 00:04:18.044 CC lib/ftl/ftl_layout.o 00:04:18.044 CC lib/ftl/ftl_debug.o 00:04:18.044 CC lib/ftl/ftl_io.o 00:04:18.044 CC lib/nbd/nbd_rpc.o 00:04:18.044 CC lib/scsi/scsi.o 00:04:18.301 CC lib/scsi/scsi_bdev.o 00:04:18.301 LIB libspdk_nbd.a 00:04:18.301 CC lib/ftl/ftl_sb.o 00:04:18.301 SO libspdk_nbd.so.7.0 00:04:18.301 CC lib/scsi/scsi_pr.o 00:04:18.301 SYMLINK libspdk_nbd.so 00:04:18.301 CC lib/ftl/ftl_l2p.o 00:04:18.558 CC lib/nvmf/nvmf.o 00:04:18.558 CC lib/ublk/ublk_rpc.o 00:04:18.558 CC lib/ftl/ftl_l2p_flat.o 00:04:18.558 CC lib/scsi/scsi_rpc.o 00:04:18.558 CC lib/ftl/ftl_nv_cache.o 00:04:18.558 CC lib/nvmf/nvmf_rpc.o 00:04:18.815 CC lib/scsi/task.o 00:04:18.815 CC lib/nvmf/transport.o 00:04:18.815 CC lib/nvmf/tcp.o 00:04:18.815 CC lib/ftl/ftl_band.o 00:04:18.815 LIB libspdk_ublk.a 00:04:18.815 SO libspdk_ublk.so.3.0 00:04:19.073 SYMLINK libspdk_ublk.so 00:04:19.073 CC lib/nvmf/stubs.o 00:04:19.073 LIB libspdk_scsi.a 00:04:19.073 SO libspdk_scsi.so.9.0 00:04:19.330 
CC lib/nvmf/mdns_server.o 00:04:19.330 SYMLINK libspdk_scsi.so 00:04:19.330 CC lib/ftl/ftl_band_ops.o 00:04:19.589 CC lib/nvmf/rdma.o 00:04:19.589 CC lib/iscsi/conn.o 00:04:19.589 CC lib/iscsi/init_grp.o 00:04:19.853 CC lib/vhost/vhost.o 00:04:19.853 CC lib/nvmf/auth.o 00:04:19.853 CC lib/iscsi/iscsi.o 00:04:19.853 CC lib/ftl/ftl_writer.o 00:04:19.853 CC lib/iscsi/md5.o 00:04:20.111 CC lib/iscsi/param.o 00:04:20.111 CC lib/ftl/ftl_rq.o 00:04:20.111 CC lib/iscsi/portal_grp.o 00:04:20.370 CC lib/ftl/ftl_reloc.o 00:04:20.370 CC lib/vhost/vhost_rpc.o 00:04:20.370 CC lib/iscsi/tgt_node.o 00:04:20.629 CC lib/vhost/vhost_scsi.o 00:04:20.629 CC lib/ftl/ftl_l2p_cache.o 00:04:20.886 CC lib/iscsi/iscsi_subsystem.o 00:04:20.886 CC lib/ftl/ftl_p2l.o 00:04:20.886 CC lib/ftl/mngt/ftl_mngt.o 00:04:20.886 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:21.144 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:21.144 CC lib/vhost/vhost_blk.o 00:04:21.144 CC lib/vhost/rte_vhost_user.o 00:04:21.401 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:21.401 CC lib/iscsi/iscsi_rpc.o 00:04:21.401 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:21.401 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:21.401 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:21.659 CC lib/iscsi/task.o 00:04:21.659 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:21.659 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:21.659 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:21.659 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:21.917 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:21.917 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:21.917 CC lib/ftl/utils/ftl_conf.o 00:04:21.917 CC lib/ftl/utils/ftl_md.o 00:04:21.917 CC lib/ftl/utils/ftl_mempool.o 00:04:22.174 CC lib/ftl/utils/ftl_bitmap.o 00:04:22.174 CC lib/ftl/utils/ftl_property.o 00:04:22.174 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:22.174 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:22.432 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:22.432 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:22.432 LIB libspdk_vhost.a 00:04:22.432 LIB libspdk_iscsi.a 00:04:22.432 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:04:22.432 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:22.432 SO libspdk_vhost.so.8.0 00:04:22.432 SO libspdk_iscsi.so.8.0 00:04:22.691 LIB libspdk_nvmf.a 00:04:22.691 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:22.691 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:22.691 SYMLINK libspdk_vhost.so 00:04:22.691 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:22.691 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:22.691 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:22.691 CC lib/ftl/base/ftl_base_dev.o 00:04:22.691 CC lib/ftl/base/ftl_base_bdev.o 00:04:22.691 SYMLINK libspdk_iscsi.so 00:04:22.691 CC lib/ftl/ftl_trace.o 00:04:22.691 SO libspdk_nvmf.so.19.0 00:04:22.950 SYMLINK libspdk_nvmf.so 00:04:23.208 LIB libspdk_ftl.a 00:04:23.467 SO libspdk_ftl.so.9.0 00:04:23.726 SYMLINK libspdk_ftl.so 00:04:24.293 CC module/env_dpdk/env_dpdk_rpc.o 00:04:24.293 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:24.293 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:24.293 CC module/keyring/file/keyring.o 00:04:24.293 CC module/keyring/linux/keyring.o 00:04:24.293 CC module/accel/error/accel_error.o 00:04:24.293 CC module/scheduler/gscheduler/gscheduler.o 00:04:24.293 CC module/blob/bdev/blob_bdev.o 00:04:24.293 CC module/accel/ioat/accel_ioat.o 00:04:24.293 CC module/sock/posix/posix.o 00:04:24.293 LIB libspdk_env_dpdk_rpc.a 00:04:24.293 SO libspdk_env_dpdk_rpc.so.6.0 00:04:24.293 CC module/keyring/file/keyring_rpc.o 00:04:24.293 LIB libspdk_scheduler_gscheduler.a 00:04:24.293 SYMLINK libspdk_env_dpdk_rpc.so 00:04:24.293 CC module/accel/ioat/accel_ioat_rpc.o 00:04:24.293 SO libspdk_scheduler_gscheduler.so.4.0 00:04:24.551 CC module/keyring/linux/keyring_rpc.o 00:04:24.551 LIB libspdk_scheduler_dpdk_governor.a 00:04:24.551 SYMLINK libspdk_scheduler_gscheduler.so 00:04:24.551 CC module/accel/error/accel_error_rpc.o 00:04:24.551 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:24.551 LIB libspdk_keyring_file.a 00:04:24.551 LIB libspdk_blob_bdev.a 00:04:24.551 LIB 
libspdk_scheduler_dynamic.a 00:04:24.551 LIB libspdk_accel_ioat.a 00:04:24.551 SO libspdk_scheduler_dynamic.so.4.0 00:04:24.551 SO libspdk_keyring_file.so.1.0 00:04:24.551 SO libspdk_blob_bdev.so.11.0 00:04:24.551 LIB libspdk_keyring_linux.a 00:04:24.551 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:24.551 SO libspdk_accel_ioat.so.6.0 00:04:24.551 SO libspdk_keyring_linux.so.1.0 00:04:24.551 SYMLINK libspdk_scheduler_dynamic.so 00:04:24.551 SYMLINK libspdk_keyring_file.so 00:04:24.551 LIB libspdk_accel_error.a 00:04:24.809 SYMLINK libspdk_blob_bdev.so 00:04:24.809 SO libspdk_accel_error.so.2.0 00:04:24.809 SYMLINK libspdk_keyring_linux.so 00:04:24.809 SYMLINK libspdk_accel_ioat.so 00:04:24.809 CC module/accel/iaa/accel_iaa.o 00:04:24.809 CC module/accel/iaa/accel_iaa_rpc.o 00:04:24.809 CC module/accel/dsa/accel_dsa.o 00:04:24.809 SYMLINK libspdk_accel_error.so 00:04:24.809 CC module/accel/dsa/accel_dsa_rpc.o 00:04:25.067 CC module/bdev/lvol/vbdev_lvol.o 00:04:25.067 CC module/bdev/gpt/gpt.o 00:04:25.067 CC module/bdev/delay/vbdev_delay.o 00:04:25.067 CC module/bdev/error/vbdev_error.o 00:04:25.067 CC module/bdev/error/vbdev_error_rpc.o 00:04:25.067 CC module/blobfs/bdev/blobfs_bdev.o 00:04:25.067 LIB libspdk_accel_iaa.a 00:04:25.067 CC module/bdev/malloc/bdev_malloc.o 00:04:25.067 SO libspdk_accel_iaa.so.3.0 00:04:25.325 SYMLINK libspdk_accel_iaa.so 00:04:25.325 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:25.325 LIB libspdk_accel_dsa.a 00:04:25.325 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:25.325 SO libspdk_accel_dsa.so.5.0 00:04:25.325 CC module/bdev/gpt/vbdev_gpt.o 00:04:25.325 LIB libspdk_bdev_error.a 00:04:25.325 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:25.325 SO libspdk_bdev_error.so.6.0 00:04:25.325 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:25.325 LIB libspdk_sock_posix.a 00:04:25.325 SYMLINK libspdk_accel_dsa.so 00:04:25.583 SO libspdk_sock_posix.so.6.0 00:04:25.583 SYMLINK libspdk_bdev_error.so 00:04:25.583 SYMLINK libspdk_sock_posix.so 
00:04:25.583 LIB libspdk_blobfs_bdev.a 00:04:25.583 LIB libspdk_bdev_delay.a 00:04:25.583 CC module/bdev/null/bdev_null.o 00:04:25.583 SO libspdk_blobfs_bdev.so.6.0 00:04:25.583 CC module/bdev/nvme/bdev_nvme.o 00:04:25.583 LIB libspdk_bdev_gpt.a 00:04:25.583 SO libspdk_bdev_delay.so.6.0 00:04:25.841 CC module/bdev/passthru/vbdev_passthru.o 00:04:25.841 SO libspdk_bdev_gpt.so.6.0 00:04:25.841 SYMLINK libspdk_blobfs_bdev.so 00:04:25.841 SYMLINK libspdk_bdev_delay.so 00:04:25.841 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:25.841 SYMLINK libspdk_bdev_gpt.so 00:04:25.841 CC module/bdev/raid/bdev_raid.o 00:04:25.841 CC module/bdev/null/bdev_null_rpc.o 00:04:25.841 LIB libspdk_bdev_lvol.a 00:04:25.841 SO libspdk_bdev_lvol.so.6.0 00:04:25.841 LIB libspdk_bdev_malloc.a 00:04:26.099 SO libspdk_bdev_malloc.so.6.0 00:04:26.099 CC module/bdev/split/vbdev_split.o 00:04:26.099 CC module/bdev/split/vbdev_split_rpc.o 00:04:26.099 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:26.099 SYMLINK libspdk_bdev_lvol.so 00:04:26.099 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:26.099 SYMLINK libspdk_bdev_malloc.so 00:04:26.099 CC module/bdev/nvme/nvme_rpc.o 00:04:26.099 LIB libspdk_bdev_null.a 00:04:26.099 SO libspdk_bdev_null.so.6.0 00:04:26.365 CC module/bdev/aio/bdev_aio.o 00:04:26.365 CC module/bdev/aio/bdev_aio_rpc.o 00:04:26.365 LIB libspdk_bdev_passthru.a 00:04:26.365 SYMLINK libspdk_bdev_null.so 00:04:26.365 SO libspdk_bdev_passthru.so.6.0 00:04:26.365 CC module/bdev/nvme/bdev_mdns_client.o 00:04:26.365 LIB libspdk_bdev_split.a 00:04:26.365 SYMLINK libspdk_bdev_passthru.so 00:04:26.365 SO libspdk_bdev_split.so.6.0 00:04:26.365 CC module/bdev/ftl/bdev_ftl.o 00:04:26.628 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:26.628 SYMLINK libspdk_bdev_split.so 00:04:26.628 CC module/bdev/nvme/vbdev_opal.o 00:04:26.628 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:26.885 CC module/bdev/iscsi/bdev_iscsi.o 00:04:26.885 LIB libspdk_bdev_zone_block.a 00:04:26.885 LIB 
libspdk_bdev_aio.a 00:04:26.885 SO libspdk_bdev_zone_block.so.6.0 00:04:26.885 CC module/bdev/rbd/bdev_rbd.o 00:04:26.885 SO libspdk_bdev_aio.so.6.0 00:04:26.885 SYMLINK libspdk_bdev_zone_block.so 00:04:26.885 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:26.885 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:26.885 SYMLINK libspdk_bdev_aio.so 00:04:26.885 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:27.142 CC module/bdev/rbd/bdev_rbd_rpc.o 00:04:27.142 LIB libspdk_bdev_ftl.a 00:04:27.142 SO libspdk_bdev_ftl.so.6.0 00:04:27.401 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:27.401 SYMLINK libspdk_bdev_ftl.so 00:04:27.401 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:27.401 CC module/bdev/raid/bdev_raid_rpc.o 00:04:27.401 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:27.401 CC module/bdev/raid/bdev_raid_sb.o 00:04:27.401 CC module/bdev/raid/raid0.o 00:04:27.401 CC module/bdev/raid/raid1.o 00:04:27.401 CC module/bdev/raid/concat.o 00:04:27.401 LIB libspdk_bdev_iscsi.a 00:04:27.659 SO libspdk_bdev_iscsi.so.6.0 00:04:27.659 LIB libspdk_bdev_rbd.a 00:04:27.659 SO libspdk_bdev_rbd.so.7.0 00:04:27.659 SYMLINK libspdk_bdev_iscsi.so 00:04:27.659 SYMLINK libspdk_bdev_rbd.so 00:04:27.917 LIB libspdk_bdev_virtio.a 00:04:27.917 SO libspdk_bdev_virtio.so.6.0 00:04:27.917 LIB libspdk_bdev_raid.a 00:04:27.917 SO libspdk_bdev_raid.so.6.0 00:04:27.917 SYMLINK libspdk_bdev_virtio.so 00:04:28.182 SYMLINK libspdk_bdev_raid.so 00:04:28.746 LIB libspdk_bdev_nvme.a 00:04:29.003 SO libspdk_bdev_nvme.so.7.0 00:04:29.003 SYMLINK libspdk_bdev_nvme.so 00:04:29.567 CC module/event/subsystems/scheduler/scheduler.o 00:04:29.567 CC module/event/subsystems/vmd/vmd.o 00:04:29.567 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:29.567 CC module/event/subsystems/sock/sock.o 00:04:29.567 CC module/event/subsystems/keyring/keyring.o 00:04:29.567 CC module/event/subsystems/iobuf/iobuf.o 00:04:29.567 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:29.567 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:29.825 
LIB libspdk_event_keyring.a 00:04:29.825 LIB libspdk_event_vhost_blk.a 00:04:29.825 LIB libspdk_event_iobuf.a 00:04:29.825 SO libspdk_event_keyring.so.1.0 00:04:29.825 LIB libspdk_event_scheduler.a 00:04:29.825 SO libspdk_event_vhost_blk.so.3.0 00:04:29.825 LIB libspdk_event_vmd.a 00:04:29.825 SO libspdk_event_scheduler.so.4.0 00:04:29.825 LIB libspdk_event_sock.a 00:04:29.825 SO libspdk_event_iobuf.so.3.0 00:04:29.825 SYMLINK libspdk_event_keyring.so 00:04:29.825 SO libspdk_event_sock.so.5.0 00:04:29.825 SO libspdk_event_vmd.so.6.0 00:04:29.825 SYMLINK libspdk_event_vhost_blk.so 00:04:29.825 SYMLINK libspdk_event_scheduler.so 00:04:29.825 SYMLINK libspdk_event_iobuf.so 00:04:29.825 SYMLINK libspdk_event_sock.so 00:04:29.825 SYMLINK libspdk_event_vmd.so 00:04:30.082 CC module/event/subsystems/accel/accel.o 00:04:30.340 LIB libspdk_event_accel.a 00:04:30.340 SO libspdk_event_accel.so.6.0 00:04:30.340 SYMLINK libspdk_event_accel.so 00:04:30.612 CC module/event/subsystems/bdev/bdev.o 00:04:30.872 LIB libspdk_event_bdev.a 00:04:30.872 SO libspdk_event_bdev.so.6.0 00:04:31.145 SYMLINK libspdk_event_bdev.so 00:04:31.145 CC module/event/subsystems/scsi/scsi.o 00:04:31.145 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:31.145 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:31.145 CC module/event/subsystems/ublk/ublk.o 00:04:31.145 CC module/event/subsystems/nbd/nbd.o 00:04:31.449 LIB libspdk_event_ublk.a 00:04:31.449 LIB libspdk_event_scsi.a 00:04:31.449 LIB libspdk_event_nbd.a 00:04:31.449 SO libspdk_event_ublk.so.3.0 00:04:31.449 SO libspdk_event_scsi.so.6.0 00:04:31.449 SO libspdk_event_nbd.so.6.0 00:04:31.450 SYMLINK libspdk_event_scsi.so 00:04:31.450 SYMLINK libspdk_event_ublk.so 00:04:31.450 LIB libspdk_event_nvmf.a 00:04:31.707 SYMLINK libspdk_event_nbd.so 00:04:31.707 SO libspdk_event_nvmf.so.6.0 00:04:31.707 SYMLINK libspdk_event_nvmf.so 00:04:31.707 CC module/event/subsystems/iscsi/iscsi.o 00:04:31.707 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 
00:04:31.968 LIB libspdk_event_vhost_scsi.a 00:04:31.968 LIB libspdk_event_iscsi.a 00:04:31.968 SO libspdk_event_vhost_scsi.so.3.0 00:04:31.968 SO libspdk_event_iscsi.so.6.0 00:04:31.968 SYMLINK libspdk_event_vhost_scsi.so 00:04:31.968 SYMLINK libspdk_event_iscsi.so 00:04:32.227 SO libspdk.so.6.0 00:04:32.227 SYMLINK libspdk.so 00:04:32.486 CC test/rpc_client/rpc_client_test.o 00:04:32.486 TEST_HEADER include/spdk/accel.h 00:04:32.486 CC app/trace_record/trace_record.o 00:04:32.486 TEST_HEADER include/spdk/accel_module.h 00:04:32.486 TEST_HEADER include/spdk/assert.h 00:04:32.486 TEST_HEADER include/spdk/barrier.h 00:04:32.486 TEST_HEADER include/spdk/base64.h 00:04:32.486 CXX app/trace/trace.o 00:04:32.486 TEST_HEADER include/spdk/bdev.h 00:04:32.486 TEST_HEADER include/spdk/bdev_module.h 00:04:32.486 TEST_HEADER include/spdk/bdev_zone.h 00:04:32.486 TEST_HEADER include/spdk/bit_array.h 00:04:32.486 TEST_HEADER include/spdk/bit_pool.h 00:04:32.486 TEST_HEADER include/spdk/blob_bdev.h 00:04:32.486 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:32.486 TEST_HEADER include/spdk/blobfs.h 00:04:32.486 TEST_HEADER include/spdk/blob.h 00:04:32.486 TEST_HEADER include/spdk/conf.h 00:04:32.486 TEST_HEADER include/spdk/config.h 00:04:32.486 TEST_HEADER include/spdk/cpuset.h 00:04:32.486 TEST_HEADER include/spdk/crc16.h 00:04:32.486 TEST_HEADER include/spdk/crc32.h 00:04:32.486 TEST_HEADER include/spdk/crc64.h 00:04:32.486 CC app/nvmf_tgt/nvmf_main.o 00:04:32.486 TEST_HEADER include/spdk/dif.h 00:04:32.486 TEST_HEADER include/spdk/dma.h 00:04:32.486 TEST_HEADER include/spdk/endian.h 00:04:32.486 TEST_HEADER include/spdk/env_dpdk.h 00:04:32.486 TEST_HEADER include/spdk/env.h 00:04:32.486 TEST_HEADER include/spdk/event.h 00:04:32.486 TEST_HEADER include/spdk/fd_group.h 00:04:32.486 TEST_HEADER include/spdk/fd.h 00:04:32.486 TEST_HEADER include/spdk/file.h 00:04:32.486 TEST_HEADER include/spdk/ftl.h 00:04:32.486 TEST_HEADER include/spdk/gpt_spec.h 00:04:32.486 TEST_HEADER 
include/spdk/hexlify.h 00:04:32.486 TEST_HEADER include/spdk/histogram_data.h 00:04:32.486 CC test/thread/poller_perf/poller_perf.o 00:04:32.486 TEST_HEADER include/spdk/idxd.h 00:04:32.486 TEST_HEADER include/spdk/idxd_spec.h 00:04:32.486 TEST_HEADER include/spdk/init.h 00:04:32.486 TEST_HEADER include/spdk/ioat.h 00:04:32.486 TEST_HEADER include/spdk/ioat_spec.h 00:04:32.486 TEST_HEADER include/spdk/iscsi_spec.h 00:04:32.486 TEST_HEADER include/spdk/json.h 00:04:32.486 CC examples/util/zipf/zipf.o 00:04:32.486 TEST_HEADER include/spdk/jsonrpc.h 00:04:32.486 TEST_HEADER include/spdk/keyring.h 00:04:32.486 TEST_HEADER include/spdk/keyring_module.h 00:04:32.486 TEST_HEADER include/spdk/likely.h 00:04:32.486 TEST_HEADER include/spdk/log.h 00:04:32.486 TEST_HEADER include/spdk/lvol.h 00:04:32.486 TEST_HEADER include/spdk/memory.h 00:04:32.486 TEST_HEADER include/spdk/mmio.h 00:04:32.486 TEST_HEADER include/spdk/nbd.h 00:04:32.486 TEST_HEADER include/spdk/net.h 00:04:32.486 TEST_HEADER include/spdk/notify.h 00:04:32.486 TEST_HEADER include/spdk/nvme.h 00:04:32.486 TEST_HEADER include/spdk/nvme_intel.h 00:04:32.486 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:32.486 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:32.486 TEST_HEADER include/spdk/nvme_spec.h 00:04:32.486 CC test/dma/test_dma/test_dma.o 00:04:32.486 TEST_HEADER include/spdk/nvme_zns.h 00:04:32.486 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:32.486 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:32.486 TEST_HEADER include/spdk/nvmf.h 00:04:32.486 CC test/app/bdev_svc/bdev_svc.o 00:04:32.486 TEST_HEADER include/spdk/nvmf_spec.h 00:04:32.486 TEST_HEADER include/spdk/nvmf_transport.h 00:04:32.486 TEST_HEADER include/spdk/opal.h 00:04:32.486 TEST_HEADER include/spdk/opal_spec.h 00:04:32.486 TEST_HEADER include/spdk/pci_ids.h 00:04:32.486 TEST_HEADER include/spdk/pipe.h 00:04:32.486 TEST_HEADER include/spdk/queue.h 00:04:32.486 TEST_HEADER include/spdk/reduce.h 00:04:32.486 TEST_HEADER include/spdk/rpc.h 
00:04:32.486 TEST_HEADER include/spdk/scheduler.h 00:04:32.486 TEST_HEADER include/spdk/scsi.h 00:04:32.486 TEST_HEADER include/spdk/scsi_spec.h 00:04:32.746 TEST_HEADER include/spdk/sock.h 00:04:32.746 TEST_HEADER include/spdk/stdinc.h 00:04:32.746 TEST_HEADER include/spdk/string.h 00:04:32.746 TEST_HEADER include/spdk/thread.h 00:04:32.746 TEST_HEADER include/spdk/trace.h 00:04:32.746 TEST_HEADER include/spdk/trace_parser.h 00:04:32.746 TEST_HEADER include/spdk/tree.h 00:04:32.746 LINK rpc_client_test 00:04:32.746 TEST_HEADER include/spdk/ublk.h 00:04:32.746 TEST_HEADER include/spdk/util.h 00:04:32.746 TEST_HEADER include/spdk/uuid.h 00:04:32.746 TEST_HEADER include/spdk/version.h 00:04:32.746 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:32.746 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:32.746 CC test/env/mem_callbacks/mem_callbacks.o 00:04:32.746 TEST_HEADER include/spdk/vhost.h 00:04:32.746 TEST_HEADER include/spdk/vmd.h 00:04:32.746 TEST_HEADER include/spdk/xor.h 00:04:32.746 TEST_HEADER include/spdk/zipf.h 00:04:32.746 CXX test/cpp_headers/accel.o 00:04:32.746 LINK nvmf_tgt 00:04:32.746 LINK poller_perf 00:04:32.746 LINK spdk_trace_record 00:04:32.746 LINK zipf 00:04:33.004 LINK bdev_svc 00:04:33.004 LINK spdk_trace 00:04:33.004 CXX test/cpp_headers/accel_module.o 00:04:33.004 CXX test/cpp_headers/assert.o 00:04:33.004 CXX test/cpp_headers/barrier.o 00:04:33.263 CXX test/cpp_headers/base64.o 00:04:33.263 LINK test_dma 00:04:33.263 CC examples/ioat/perf/perf.o 00:04:33.263 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:33.263 CC examples/vmd/lsvmd/lsvmd.o 00:04:33.263 CC examples/idxd/perf/perf.o 00:04:33.263 CXX test/cpp_headers/bdev.o 00:04:33.263 CC examples/thread/thread/thread_ex.o 00:04:33.263 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:33.521 CC app/iscsi_tgt/iscsi_tgt.o 00:04:33.521 LINK lsvmd 00:04:33.521 LINK mem_callbacks 00:04:33.521 LINK interrupt_tgt 00:04:33.521 LINK ioat_perf 00:04:33.521 CXX test/cpp_headers/bdev_module.o 00:04:33.521 
CC test/app/histogram_perf/histogram_perf.o 00:04:33.778 LINK thread 00:04:33.778 LINK iscsi_tgt 00:04:33.778 CC test/env/vtophys/vtophys.o 00:04:33.778 CC examples/vmd/led/led.o 00:04:33.778 CXX test/cpp_headers/bdev_zone.o 00:04:33.778 LINK histogram_perf 00:04:33.778 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:33.778 CC examples/ioat/verify/verify.o 00:04:33.778 LINK idxd_perf 00:04:33.778 LINK vtophys 00:04:33.778 LINK nvme_fuzz 00:04:33.778 LINK led 00:04:34.035 LINK env_dpdk_post_init 00:04:34.035 CXX test/cpp_headers/bit_array.o 00:04:34.035 LINK verify 00:04:34.035 CC test/env/memory/memory_ut.o 00:04:34.035 CC app/spdk_lspci/spdk_lspci.o 00:04:34.293 CXX test/cpp_headers/bit_pool.o 00:04:34.293 CC app/spdk_nvme_perf/perf.o 00:04:34.293 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:34.293 CC app/spdk_tgt/spdk_tgt.o 00:04:34.293 CC examples/sock/hello_world/hello_sock.o 00:04:34.293 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:34.293 LINK spdk_lspci 00:04:34.550 CC test/event/event_perf/event_perf.o 00:04:34.550 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:34.550 CC test/nvme/aer/aer.o 00:04:34.550 CXX test/cpp_headers/blob_bdev.o 00:04:34.550 LINK spdk_tgt 00:04:34.550 LINK hello_sock 00:04:34.808 LINK event_perf 00:04:34.808 CXX test/cpp_headers/blobfs_bdev.o 00:04:34.808 LINK aer 00:04:35.067 CC test/accel/dif/dif.o 00:04:35.067 LINK vhost_fuzz 00:04:35.067 CC examples/accel/perf/accel_perf.o 00:04:35.067 CXX test/cpp_headers/blobfs.o 00:04:35.067 CC test/event/reactor/reactor.o 00:04:35.067 CC test/blobfs/mkfs/mkfs.o 00:04:35.067 CXX test/cpp_headers/blob.o 00:04:35.067 CC test/nvme/reset/reset.o 00:04:35.325 LINK reactor 00:04:35.325 CXX test/cpp_headers/conf.o 00:04:35.325 LINK mkfs 00:04:35.325 LINK memory_ut 00:04:35.583 LINK dif 00:04:35.583 CC test/event/reactor_perf/reactor_perf.o 00:04:35.583 CC test/lvol/esnap/esnap.o 00:04:35.583 CXX test/cpp_headers/config.o 00:04:35.583 CXX test/cpp_headers/cpuset.o 00:04:35.583 LINK 
reset 00:04:35.584 LINK accel_perf 00:04:35.584 LINK reactor_perf 00:04:35.584 LINK spdk_nvme_perf 00:04:35.841 CC test/env/pci/pci_ut.o 00:04:35.841 CC test/event/app_repeat/app_repeat.o 00:04:35.841 CXX test/cpp_headers/crc16.o 00:04:35.841 CC test/nvme/sgl/sgl.o 00:04:36.098 LINK app_repeat 00:04:36.098 CXX test/cpp_headers/crc32.o 00:04:36.098 CC app/spdk_nvme_identify/identify.o 00:04:36.098 CC examples/nvme/hello_world/hello_world.o 00:04:36.098 CC examples/blob/hello_world/hello_blob.o 00:04:36.098 CC examples/bdev/hello_world/hello_bdev.o 00:04:36.356 CXX test/cpp_headers/crc64.o 00:04:36.356 LINK hello_world 00:04:36.356 LINK pci_ut 00:04:36.356 LINK hello_bdev 00:04:36.356 LINK hello_blob 00:04:36.356 LINK sgl 00:04:36.356 CC test/event/scheduler/scheduler.o 00:04:36.356 CXX test/cpp_headers/dif.o 00:04:36.613 LINK iscsi_fuzz 00:04:36.613 CC examples/nvme/reconnect/reconnect.o 00:04:36.613 CXX test/cpp_headers/dma.o 00:04:36.871 CC test/nvme/e2edp/nvme_dp.o 00:04:36.871 CXX test/cpp_headers/endian.o 00:04:36.871 LINK scheduler 00:04:36.871 CC examples/bdev/bdevperf/bdevperf.o 00:04:36.871 CC test/bdev/bdevio/bdevio.o 00:04:36.871 CXX test/cpp_headers/env_dpdk.o 00:04:36.871 CC examples/blob/cli/blobcli.o 00:04:37.129 LINK reconnect 00:04:37.129 LINK spdk_nvme_identify 00:04:37.129 CC test/app/jsoncat/jsoncat.o 00:04:37.129 CXX test/cpp_headers/env.o 00:04:37.129 CC test/app/stub/stub.o 00:04:37.387 LINK jsoncat 00:04:37.387 LINK nvme_dp 00:04:37.387 CXX test/cpp_headers/event.o 00:04:37.387 CC app/spdk_nvme_discover/discovery_aer.o 00:04:37.387 LINK stub 00:04:37.387 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:37.387 CXX test/cpp_headers/fd_group.o 00:04:37.646 LINK bdevio 00:04:37.646 CC examples/nvme/arbitration/arbitration.o 00:04:37.646 LINK blobcli 00:04:37.646 CC test/nvme/overhead/overhead.o 00:04:37.646 LINK spdk_nvme_discover 00:04:37.646 CXX test/cpp_headers/fd.o 00:04:37.904 CC app/spdk_top/spdk_top.o 00:04:37.904 LINK bdevperf 
00:04:37.904 LINK arbitration 00:04:37.904 CC examples/nvme/hotplug/hotplug.o 00:04:37.904 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:37.904 LINK overhead 00:04:37.904 CXX test/cpp_headers/file.o 00:04:38.163 CC examples/nvme/abort/abort.o 00:04:38.163 CXX test/cpp_headers/ftl.o 00:04:38.163 LINK cmb_copy 00:04:38.163 LINK hotplug 00:04:38.163 CXX test/cpp_headers/gpt_spec.o 00:04:38.421 CXX test/cpp_headers/hexlify.o 00:04:38.421 LINK nvme_manage 00:04:38.421 CC test/nvme/err_injection/err_injection.o 00:04:38.421 CXX test/cpp_headers/histogram_data.o 00:04:38.421 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:38.678 CC test/nvme/startup/startup.o 00:04:38.678 CXX test/cpp_headers/idxd.o 00:04:38.678 CC test/nvme/reserve/reserve.o 00:04:38.678 LINK abort 00:04:38.678 CC test/nvme/simple_copy/simple_copy.o 00:04:38.678 LINK err_injection 00:04:38.678 CC app/vhost/vhost.o 00:04:38.678 CXX test/cpp_headers/idxd_spec.o 00:04:38.678 LINK pmr_persistence 00:04:38.678 LINK startup 00:04:38.936 CXX test/cpp_headers/init.o 00:04:38.936 LINK reserve 00:04:38.936 LINK vhost 00:04:38.936 LINK simple_copy 00:04:38.936 CC app/spdk_dd/spdk_dd.o 00:04:38.936 LINK spdk_top 00:04:38.936 CC test/nvme/connect_stress/connect_stress.o 00:04:39.194 CC test/nvme/boot_partition/boot_partition.o 00:04:39.194 CXX test/cpp_headers/ioat.o 00:04:39.194 CC examples/nvmf/nvmf/nvmf.o 00:04:39.194 CC test/nvme/compliance/nvme_compliance.o 00:04:39.194 LINK connect_stress 00:04:39.194 CC test/nvme/fused_ordering/fused_ordering.o 00:04:39.194 CXX test/cpp_headers/ioat_spec.o 00:04:39.452 LINK boot_partition 00:04:39.452 LINK spdk_dd 00:04:39.452 CC app/fio/nvme/fio_plugin.o 00:04:39.452 CXX test/cpp_headers/iscsi_spec.o 00:04:39.452 LINK fused_ordering 00:04:39.452 CC app/fio/bdev/fio_plugin.o 00:04:39.452 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:39.452 LINK nvmf 00:04:39.710 CXX test/cpp_headers/json.o 00:04:39.710 CXX test/cpp_headers/jsonrpc.o 00:04:39.710 LINK nvme_compliance 
00:04:39.710 CC test/nvme/cuse/cuse.o 00:04:39.710 CC test/nvme/fdp/fdp.o 00:04:39.710 CXX test/cpp_headers/keyring.o 00:04:39.710 LINK doorbell_aers 00:04:39.968 CXX test/cpp_headers/keyring_module.o 00:04:39.968 CXX test/cpp_headers/likely.o 00:04:39.968 CXX test/cpp_headers/log.o 00:04:39.968 CXX test/cpp_headers/lvol.o 00:04:39.968 CXX test/cpp_headers/memory.o 00:04:39.968 LINK spdk_bdev 00:04:40.226 LINK fdp 00:04:40.226 CXX test/cpp_headers/mmio.o 00:04:40.226 CXX test/cpp_headers/nbd.o 00:04:40.226 CXX test/cpp_headers/net.o 00:04:40.226 CXX test/cpp_headers/notify.o 00:04:40.226 CXX test/cpp_headers/nvme.o 00:04:40.226 CXX test/cpp_headers/nvme_intel.o 00:04:40.226 CXX test/cpp_headers/nvme_ocssd.o 00:04:40.226 LINK spdk_nvme 00:04:40.483 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:40.483 CXX test/cpp_headers/nvme_spec.o 00:04:40.483 CXX test/cpp_headers/nvme_zns.o 00:04:40.483 CXX test/cpp_headers/nvmf_cmd.o 00:04:40.483 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:40.483 CXX test/cpp_headers/nvmf.o 00:04:40.483 CXX test/cpp_headers/nvmf_spec.o 00:04:40.483 CXX test/cpp_headers/nvmf_transport.o 00:04:40.483 CXX test/cpp_headers/opal.o 00:04:40.741 CXX test/cpp_headers/opal_spec.o 00:04:40.741 CXX test/cpp_headers/pci_ids.o 00:04:40.741 CXX test/cpp_headers/pipe.o 00:04:40.741 CXX test/cpp_headers/queue.o 00:04:40.741 CXX test/cpp_headers/reduce.o 00:04:40.741 CXX test/cpp_headers/rpc.o 00:04:40.741 CXX test/cpp_headers/scheduler.o 00:04:40.741 CXX test/cpp_headers/scsi.o 00:04:40.741 CXX test/cpp_headers/scsi_spec.o 00:04:40.741 CXX test/cpp_headers/sock.o 00:04:41.000 CXX test/cpp_headers/stdinc.o 00:04:41.000 CXX test/cpp_headers/string.o 00:04:41.000 CXX test/cpp_headers/thread.o 00:04:41.000 CXX test/cpp_headers/trace.o 00:04:41.000 CXX test/cpp_headers/trace_parser.o 00:04:41.000 CXX test/cpp_headers/tree.o 00:04:41.000 CXX test/cpp_headers/ublk.o 00:04:41.000 CXX test/cpp_headers/util.o 00:04:41.000 CXX test/cpp_headers/uuid.o 00:04:41.000 CXX 
test/cpp_headers/version.o 00:04:41.000 CXX test/cpp_headers/vfio_user_pci.o 00:04:41.000 CXX test/cpp_headers/vfio_user_spec.o 00:04:41.000 CXX test/cpp_headers/vhost.o 00:04:41.000 CXX test/cpp_headers/vmd.o 00:04:41.000 CXX test/cpp_headers/xor.o 00:04:41.000 CXX test/cpp_headers/zipf.o 00:04:41.258 LINK cuse 00:04:42.637 LINK esnap 00:04:43.206 00:04:43.206 real 1m12.567s 00:04:43.206 user 7m6.594s 00:04:43.206 sys 1m34.176s 00:04:43.206 08:46:50 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:43.206 08:46:50 make -- common/autotest_common.sh@10 -- $ set +x 00:04:43.206 ************************************ 00:04:43.206 END TEST make 00:04:43.206 ************************************ 00:04:43.206 08:46:50 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:43.206 08:46:50 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:43.206 08:46:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:43.206 08:46:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.206 08:46:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:43.206 08:46:50 -- pm/common@44 -- $ pid=5364 00:04:43.206 08:46:50 -- pm/common@50 -- $ kill -TERM 5364 00:04:43.206 08:46:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.206 08:46:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:43.206 08:46:50 -- pm/common@44 -- $ pid=5366 00:04:43.206 08:46:50 -- pm/common@50 -- $ kill -TERM 5366 00:04:43.206 08:46:50 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.206 08:46:50 -- nvmf/common.sh@7 -- # uname -s 00:04:43.206 08:46:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.206 08:46:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.206 08:46:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.206 08:46:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:04:43.206 08:46:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.206 08:46:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.206 08:46:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.206 08:46:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.206 08:46:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.206 08:46:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.206 08:46:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dec25852-ab30-4fdb-92ca-55715b3a612a 00:04:43.206 08:46:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=dec25852-ab30-4fdb-92ca-55715b3a612a 00:04:43.206 08:46:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.206 08:46:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.206 08:46:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.206 08:46:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.206 08:46:50 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.206 08:46:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.206 08:46:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.206 08:46:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.206 08:46:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.207 08:46:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.207 
08:46:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.207 08:46:50 -- paths/export.sh@5 -- # export PATH 00:04:43.207 08:46:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.207 08:46:50 -- nvmf/common.sh@47 -- # : 0 00:04:43.207 08:46:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:43.207 08:46:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:43.207 08:46:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.207 08:46:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.207 08:46:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.207 08:46:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:43.207 08:46:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:43.207 08:46:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:43.207 08:46:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:43.207 08:46:50 -- spdk/autotest.sh@32 -- # uname -s 00:04:43.207 08:46:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:43.207 08:46:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:43.207 08:46:50 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:43.207 08:46:50 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:43.207 08:46:50 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:43.207 08:46:50 
-- spdk/autotest.sh@44 -- # modprobe nbd 00:04:43.207 08:46:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:43.207 08:46:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:43.207 08:46:50 -- spdk/autotest.sh@48 -- # udevadm_pid=53099 00:04:43.207 08:46:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:43.207 08:46:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:43.207 08:46:50 -- pm/common@17 -- # local monitor 00:04:43.207 08:46:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.207 08:46:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.207 08:46:50 -- pm/common@25 -- # sleep 1 00:04:43.207 08:46:50 -- pm/common@21 -- # date +%s 00:04:43.207 08:46:50 -- pm/common@21 -- # date +%s 00:04:43.207 08:46:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721897210 00:04:43.207 08:46:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721897210 00:04:43.207 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721897210_collect-vmstat.pm.log 00:04:43.207 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721897210_collect-cpu-load.pm.log 00:04:44.587 08:46:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:44.587 08:46:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:44.587 08:46:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.587 08:46:51 -- common/autotest_common.sh@10 -- # set +x 00:04:44.587 08:46:51 -- spdk/autotest.sh@59 -- # create_test_list 00:04:44.587 08:46:51 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:44.587 08:46:51 -- common/autotest_common.sh@10 -- # set +x 00:04:44.587 08:46:51 -- spdk/autotest.sh@61 -- 
# dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:44.587 08:46:51 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:44.587 08:46:51 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:44.587 08:46:51 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:44.587 08:46:51 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:44.587 08:46:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:44.587 08:46:51 -- common/autotest_common.sh@1455 -- # uname 00:04:44.587 08:46:51 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:44.587 08:46:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:44.587 08:46:51 -- common/autotest_common.sh@1475 -- # uname 00:04:44.587 08:46:51 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:44.587 08:46:51 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:44.587 08:46:51 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:44.587 08:46:51 -- spdk/autotest.sh@72 -- # hash lcov 00:04:44.587 08:46:51 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:44.587 08:46:51 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:44.587 --rc lcov_branch_coverage=1 00:04:44.587 --rc lcov_function_coverage=1 00:04:44.587 --rc genhtml_branch_coverage=1 00:04:44.587 --rc genhtml_function_coverage=1 00:04:44.587 --rc genhtml_legend=1 00:04:44.587 --rc geninfo_all_blocks=1 00:04:44.587 ' 00:04:44.587 08:46:51 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:44.587 --rc lcov_branch_coverage=1 00:04:44.587 --rc lcov_function_coverage=1 00:04:44.587 --rc genhtml_branch_coverage=1 00:04:44.587 --rc genhtml_function_coverage=1 00:04:44.587 --rc genhtml_legend=1 00:04:44.587 --rc geninfo_all_blocks=1 00:04:44.587 ' 00:04:44.587 08:46:51 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:44.587 --rc lcov_branch_coverage=1 00:04:44.587 --rc lcov_function_coverage=1 00:04:44.587 --rc genhtml_branch_coverage=1 
00:04:44.587 --rc genhtml_function_coverage=1 00:04:44.587 --rc genhtml_legend=1 00:04:44.587 --rc geninfo_all_blocks=1 00:04:44.587 --no-external' 00:04:44.587 08:46:51 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:44.587 --rc lcov_branch_coverage=1 00:04:44.587 --rc lcov_function_coverage=1 00:04:44.587 --rc genhtml_branch_coverage=1 00:04:44.587 --rc genhtml_function_coverage=1 00:04:44.587 --rc genhtml_legend=1 00:04:44.587 --rc geninfo_all_blocks=1 00:04:44.587 --no-external' 00:04:44.587 08:46:51 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:44.587 lcov: LCOV version 1.14 00:04:44.587 08:46:51 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:59.479 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:59.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:11.703 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:11.703 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:11.703 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:11.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:11.703 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:11.704 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:11.704 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:11.704 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:11.704 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:11.704 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:11.704 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:11.704 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:11.704 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:11.704 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:11.704 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:11.704 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:11.704 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:11.704 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found
00:05:11.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno
00:05:11.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:05:11.965 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno
00:05:11.965 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found
00:05:11.965 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno
00:05:11.965 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:05:11.965 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno
00:05:12.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found
00:05:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno
00:05:15.515 08:47:22 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:05:15.515 08:47:22 -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:15.515 08:47:22 -- common/autotest_common.sh@10 -- # set +x
00:05:15.515 08:47:22 -- spdk/autotest.sh@91 -- # rm -f
00:05:15.515 08:47:22 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:16.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:16.082 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:05:16.343 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:05:16.343 08:47:23 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:05:16.343 08:47:23 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:05:16.343 08:47:23 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:05:16.343 08:47:23 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:05:16.343 08:47:23 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:05:16.343 08:47:23 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:05:16.343 08:47:23 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:05:16.343 08:47:23 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:16.343 08:47:23 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:05:16.343 08:47:23 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:05:16.343 08:47:23 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1
00:05:16.343 08:47:23 -- common/autotest_common.sh@1662 -- # local device=nvme1n1
00:05:16.343 08:47:23 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:05:16.343 08:47:23 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:05:16.343 08:47:23 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:05:16.343 08:47:23 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2
00:05:16.343 08:47:23 -- common/autotest_common.sh@1662 -- # local device=nvme1n2
00:05:16.343 08:47:23 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]]
00:05:16.343 08:47:23 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:05:16.343 08:47:23 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:05:16.343 08:47:23 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3
00:05:16.343 08:47:23 -- common/autotest_common.sh@1662 -- # local device=nvme1n3
00:05:16.343 08:47:23 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]]
00:05:16.343 08:47:23 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:05:16.343 08:47:23 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:05:16.343 08:47:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:05:16.343 08:47:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:05:16.343 08:47:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:05:16.343 08:47:23 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:05:16.343 08:47:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:05:16.343 No valid GPT data, bailing
00:05:16.343 08:47:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:16.343 08:47:23 -- scripts/common.sh@391 -- # pt=
00:05:16.343 08:47:23 -- scripts/common.sh@392 -- # return 1
00:05:16.343 08:47:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:16.343 1+0 records in
00:05:16.343 1+0 records out
00:05:16.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00613368 s, 171 MB/s
00:05:16.343 08:47:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:05:16.343 08:47:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:05:16.343 08:47:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1
00:05:16.343 08:47:23 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt
00:05:16.343 08:47:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:05:16.343 No valid GPT data, bailing
00:05:16.343 08:47:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:05:16.343 08:47:23 -- scripts/common.sh@391 -- # pt=
00:05:16.343 08:47:23 -- scripts/common.sh@392 -- # return 1
00:05:16.343 08:47:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:05:16.343 1+0 records in
00:05:16.343 1+0 records out
00:05:16.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00642634 s, 163 MB/s
00:05:16.343 08:47:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:05:16.343 08:47:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:05:16.343 08:47:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2
00:05:16.343 08:47:23 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt
00:05:16.343 08:47:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2
No valid GPT data, bailing
00:05:16.603 08:47:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:05:16.603 08:47:23 -- scripts/common.sh@391 -- # pt=
00:05:16.603 08:47:23 -- scripts/common.sh@392 -- # return 1
00:05:16.603 08:47:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1
00:05:16.603 1+0 records in
00:05:16.603 1+0 records out
00:05:16.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0064557 s, 162 MB/s
00:05:16.603 08:47:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:05:16.603 08:47:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:05:16.603 08:47:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3
00:05:16.603 08:47:23 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt
00:05:16.603 08:47:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
No valid GPT data, bailing
00:05:16.603 08:47:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:05:16.603 08:47:23 -- scripts/common.sh@391 -- # pt=
00:05:16.603 08:47:23 -- scripts/common.sh@392 -- # return 1
00:05:16.603 08:47:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:05:16.603 1+0 records in
00:05:16.603 1+0 records out
00:05:16.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494274 s, 212 MB/s
00:05:16.603 08:47:23 -- spdk/autotest.sh@118 -- # sync
00:05:16.603 08:47:23 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:16.603 08:47:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:16.603 08:47:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:19.139 08:47:26 -- spdk/autotest.sh@124 -- # uname -s
00:05:19.139 08:47:26 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:05:19.139 08:47:26 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:05:19.139 08:47:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:19.139 08:47:26 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:19.139 08:47:26 -- common/autotest_common.sh@10 -- # set +x
00:05:19.139 ************************************
00:05:19.139 START TEST setup.sh
00:05:19.139 ************************************
00:05:19.139 08:47:26 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:05:19.139 * Looking for test storage...
00:05:19.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:05:19.139 08:47:26 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:05:19.139 08:47:26 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:05:19.139 08:47:26 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:05:19.139 08:47:26 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:19.139 08:47:26 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:19.139 08:47:26 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:19.139 ************************************
00:05:19.139 START TEST acl
00:05:19.139 ************************************
00:05:19.139 08:47:26 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:05:19.399 * Looking for test storage...
00:05:19.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:05:19.399 08:47:26 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]]
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]]
00:05:19.399 08:47:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:05:19.399 08:47:26 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:05:19.399 08:47:26 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:05:19.399 08:47:26 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:05:19.399 08:47:26 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:05:19.399 08:47:26 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:05:19.399 08:47:26 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:19.399 08:47:26 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:20.338 08:47:27 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:05:20.338 08:47:27 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:05:20.338 08:47:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:20.338 08:47:27 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:05:20.338 08:47:27 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:05:20.338 08:47:27 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]]
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:20.909 Hugepages
00:05:20.909 node hugesize free / total
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:20.909
00:05:20.909 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]]
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]]
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]]
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]]
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:05:20.909 08:47:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:21.272 08:47:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]]
00:05:21.272 08:47:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:05:21.272 08:47:28 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]]
00:05:21.272 08:47:28 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:05:21.272 08:47:28 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:05:21.272 08:47:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:21.272 08:47:28 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 ))
00:05:21.272 08:47:28 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:05:21.272 08:47:28 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:21.272 08:47:28 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:21.272 08:47:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:05:21.272 ************************************
00:05:21.272 START TEST denied
00:05:21.272 ************************************
00:05:21.272 08:47:28 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied
00:05:21.272 08:47:28 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0'
00:05:21.272 08:47:28 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:05:21.272 08:47:28 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0'
00:05:21.272 08:47:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:05:21.272 08:47:28 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:22.218 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0
00:05:22.218 08:47:29 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0
00:05:22.218 08:47:29 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:05:22.218 08:47:29 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:05:22.218 08:47:29 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]]
00:05:22.218 08:47:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver
00:05:22.218 08:47:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:05:22.218 08:47:29 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:05:22.218 08:47:29 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:05:22.218 08:47:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:22.218 08:47:29 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:22.798
00:05:22.798 real 0m1.697s
00:05:22.798 user 0m0.625s
00:05:22.798 sys 0m1.054s
00:05:22.798 08:47:29 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:22.798 08:47:29 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:05:22.798 ************************************
00:05:22.798 END TEST denied
00:05:22.798 ************************************
00:05:22.798 08:47:29 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:05:22.798 08:47:29 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:22.798 08:47:29 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:22.798 08:47:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:05:22.798 ************************************
00:05:22.798 START TEST allowed
00:05:22.798 ************************************
00:05:22.798 08:47:29 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed
00:05:22.798 08:47:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0
00:05:22.798 08:47:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:05:22.798 08:47:29 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*'
00:05:22.798 08:47:29 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:05:22.798 08:47:29 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:23.736 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:23.736 08:47:30 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0
00:05:23.736 08:47:30 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:05:23.736 08:47:30 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
00:05:23.736 08:47:30 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]]
00:05:23.736 08:47:30 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver
00:05:23.736 08:47:30 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:05:23.736 08:47:30 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:05:23.736 08:47:30 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:05:23.736 08:47:30 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:23.736 08:47:30 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:24.676
00:05:24.676 real 0m1.702s
00:05:24.676 user 0m0.691s
00:05:24.676 sys 0m1.030s
00:05:24.676 08:47:31 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:24.676 08:47:31 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:05:24.676 ************************************
00:05:24.676 END TEST allowed
00:05:24.676 ************************************
00:05:24.676
00:05:24.676 real 0m5.341s
00:05:24.676 user 0m2.144s
00:05:24.676 sys 0m3.230s
00:05:24.676 08:47:31 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:24.676 08:47:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:05:24.676 ************************************
00:05:24.676 END TEST acl
00:05:24.676 ************************************
00:05:24.676 08:47:31 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:05:24.676 08:47:31 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:24.676 08:47:31 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:24.676 08:47:31 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:24.676 ************************************
00:05:24.676 START TEST hugepages
00:05:24.676 ************************************
00:05:24.676 08:47:31 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:05:24.676 * Looking for test storage...
00:05:24.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 5811044 kB' 'MemAvailable: 7389168 kB' 'Buffers: 2436 kB' 'Cached: 1792280 kB' 'SwapCached: 0 kB' 'Active: 435932 kB' 'Inactive: 1464176 kB' 'Active(anon): 115880 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 107024 kB' 'Mapped: 48616 kB' 'Shmem: 10488 kB' 'KReclaimable: 62116 kB' 'Slab: 137428 kB' 'SReclaimable: 62116 kB' 'SUnreclaim: 75312 kB' 'KernelStack: 6508 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 336592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.676 08:47:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[... identical skip iterations for MemAvailable through HugePages_Rsvd elided ...]
00:05:24.678 08:47:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp ==
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:24.678 08:47:31 
setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:24.678 08:47:31 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:24.678 08:47:31 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.678 08:47:31 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.678 08:47:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:24.678 ************************************ 00:05:24.678 START TEST default_setup 00:05:24.678 ************************************ 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:24.678 08:47:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:24.679 08:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.679 08:47:31 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.618 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.618 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # 
verify_nr_hugepages 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.618 08:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7916712 kB' 'MemAvailable: 9494668 kB' 'Buffers: 2436 kB' 'Cached: 1792272 kB' 'SwapCached: 0 kB' 'Active: 452596 kB' 'Inactive: 1464176 kB' 'Active(anon): 132544 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123656 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 61780 kB' 'Slab: 137200 kB' 'SReclaimable: 61780 kB' 'SUnreclaim: 75420 kB' 'KernelStack: 6448 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.618 08:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.618 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical skip iterations for Buffers through Writeback elided ...]
00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 --
# read -r var val _ 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.619 08:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.619 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.620 08:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.620 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7916996 kB' 'MemAvailable: 9494952 kB' 'Buffers: 2436 kB' 'Cached: 1792272 kB' 'SwapCached: 0 kB' 'Active: 452168 kB' 'Inactive: 1464176 kB' 'Active(anon): 132116 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123260 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 61780 kB' 'Slab: 137208 kB' 'SReclaimable: 61780 kB' 
'SUnreclaim: 75428 kB' 'KernelStack: 6464 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 
08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.885 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 
08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.886 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7916524 kB' 'MemAvailable: 9494480 kB' 'Buffers: 2436 kB' 'Cached: 1792272 kB' 'SwapCached: 0 kB' 'Active: 452116 kB' 'Inactive: 1464176 kB' 'Active(anon): 132064 kB' 'Inactive(anon): 0 kB' 'Active(file): 
320052 kB' 'Inactive(file): 1464176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123192 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 61780 kB' 'Slab: 137196 kB' 'SReclaimable: 61780 kB' 'SUnreclaim: 75416 kB' 'KernelStack: 6464 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.887 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.888 08:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.888 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:25.889 08:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:25.889 nr_hugepages=1024 00:05:25.889 resv_hugepages=0 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:25.889 surplus_hugepages=0 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:25.889 anon_hugepages=0 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7916524 kB' 'MemAvailable: 9494480 kB' 'Buffers: 2436 kB' 'Cached: 1792272 kB' 'SwapCached: 0 kB' 'Active: 452092 kB' 'Inactive: 1464176 kB' 'Active(anon): 132040 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123164 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 61780 kB' 'Slab: 137184 kB' 'SReclaimable: 61780 kB' 'SUnreclaim: 75404 kB' 'KernelStack: 6464 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.889 08:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.889 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [xtrace condensed: setup/common.sh@31-32 repeats IFS=': '; read -r var val _ for every /proc/meminfo field from Active through Unaccepted, each compared against HugePages_Total and skipped via continue] 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.891 08:47:32
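The xtrace above walks setup/common.sh's get_meminfo helper: it splits each /proc/meminfo line on ': ' with read, skips fields that do not match the requested key, and echoes the value once the key (here HugePages_Total) matches. A minimal standalone sketch of that parsing loop, assuming behavior inferred from the trace (the function name and the file argument are illustrative, not the script's own API):

```shell
#!/usr/bin/env bash
# Sketch of the field scan traced above: split "Field: value kB" lines
# on ': ' and return the value for the requested field.
get_meminfo_field() {
  local get=$1 file=$2 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"   # e.g. 1024 for HugePages_Total in this run
      return 0
    fi
  done < "$file"
  return 1          # field not present
}
```

Called as `get_meminfo_field HugePages_Total /proc/meminfo`, it mirrors the loop's final `echo 1024; return 0` in the log.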
setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.891 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7916524 kB' 'MemUsed: 4325444 kB' 'SwapCached: 0 kB' 'Active: 452060 kB' 'Inactive: 1464176 kB' 'Active(anon): 132008 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1794708 kB' 'Mapped: 48620 kB' 'AnonPages: 123128 kB' 'Shmem: 10464 kB' 'KernelStack: 6464 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61780 kB' 'Slab: 137184 kB' 'SReclaimable: 61780 kB' 'SUnreclaim: 75404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' [xtrace condensed: setup/common.sh@31-32 reads every node0 meminfo field from MemTotal through HugePages_Free, each compared against HugePages_Surp and skipped via continue] 00:05:25.893 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var
val _ 00:05:25.893 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.893 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:25.893 08:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:25.893 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.893 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.893 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.893 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.893 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:25.893 node0=1024 expecting 1024 00:05:25.893 08:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:25.893 00:05:25.893 real 0m1.035s 00:05:25.893 user 0m0.456s 00:05:25.893 sys 0m0.546s 00:05:25.893 08:47:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.893 08:47:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:25.893 ************************************ 00:05:25.893 END TEST default_setup 00:05:25.893 ************************************ 00:05:25.893 08:47:32 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:25.893 08:47:32 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.893 08:47:32 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.893 08:47:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:25.893 ************************************ 00:05:25.893 START TEST per_node_1G_alloc 00:05:25.893 
************************************ 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:25.893 
08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.893 08:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.467 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.467 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:26.467 
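The `per_node_1G_alloc` setup above calls `get_test_nr_hugepages 1048576 0`: a 1 GiB (1048576 kB) request divided by the default 2048 kB hugepage size yields `nr_hugepages=512`, all pinned to NUMA node 0 via `NRHUGE=512 HUGENODE=0`. A sketch of that arithmetic, with illustrative variable names rather than the exact SPDK ones:

```shell
#!/usr/bin/env bash
# Sketch of the get_test_nr_hugepages arithmetic in the trace: convert
# a total reservation size into a page count at the default hugepage
# size, then assign that count to each requested NUMA node. Names are
# illustrative, not the exact setup/hugepages.sh variables.
size_kb=1048576          # requested total: 1 GiB expressed in kB
default_hugepage_kb=2048 # common x86_64 default hugepage size
node_ids=(0)             # restrict the allocation to NUMA node 0

nr_hugepages=$(( size_kb / default_hugepage_kb ))
declare -A nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages
done
echo "NRHUGE=$nr_hugepages HUGENODE=${node_ids[0]}"
```

With these inputs the script prints `NRHUGE=512 HUGENODE=0`, matching the values `scripts/setup.sh` is invoked with in the trace.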
08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8962416 kB' 'MemAvailable: 10540384 kB' 'Buffers: 2436 kB' 'Cached: 1792272 kB' 'SwapCached: 0 kB' 'Active: 452492 kB' 'Inactive: 1464192 kB' 'Active(anon): 132440 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464192 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123316 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137156 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75384 kB' 'KernelStack: 
6480 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.467 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 
08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8962164 kB' 'MemAvailable: 10540132 kB' 'Buffers: 2436 kB' 'Cached: 1792272 kB' 'SwapCached: 0 kB' 'Active: 452192 kB' 'Inactive: 1464192 kB' 'Active(anon): 132140 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464192 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123324 kB' 'Mapped: 48616 kB' 'Shmem: 
10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137152 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75380 kB' 'KernelStack: 6464 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.468 08:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.468 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.469 08:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical common.sh@31/@32 IFS=': ' / read -r var val _ / [[ <key> == HugePages_Surp ]] / continue trace repeated for every remaining /proc/meminfo key, Mlocked through HugePages_Rsvd ...]
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:26.469 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:26.470 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:26.470 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:26.470 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:26.470 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:26.470 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8962164 kB' 'MemAvailable: 10540132 kB' 'Buffers: 2436 kB' 'Cached: 1792272 kB' 'SwapCached: 0 kB' 'Active: 452076 kB' 'Inactive: 1464192 kB' 'Active(anon): 132024 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464192 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123208 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137152 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75380 kB' 'KernelStack: 6464 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:05:26.470 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:26.470 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical common.sh@31/@32 IFS=': ' / read -r var val _ / [[ <key> == HugePages_Rsvd ]] / continue trace repeated for each subsequent /proc/meminfo key, MemFree through CmaFree ...]
00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@100 -- # resv=0 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:26.471 nr_hugepages=512 00:05:26.471 resv_hugepages=0 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:26.471 surplus_hugepages=0 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:26.471 anon_hugepages=0 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.471 08:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8962164 kB' 'MemAvailable: 10540132 kB' 'Buffers: 2436 kB' 'Cached: 1792272 kB' 'SwapCached: 0 kB' 'Active: 452024 kB' 'Inactive: 1464192 kB' 'Active(anon): 131972 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464192 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123160 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137152 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75380 kB' 'KernelStack: 6448 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.471 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.472 08:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8962684 kB' 'MemUsed: 3279284 kB' 'SwapCached: 0 kB' 'Active: 452268 kB' 'Inactive: 1464192 kB' 'Active(anon): 132216 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464192 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1794708 kB' 'Mapped: 48616 kB' 'AnonPages: 123408 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61772 kB' 'Slab: 137148 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:26.472 08:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 
08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.472 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:26.473 node0=512 expecting 512 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:26.473 00:05:26.473 real 0m0.624s 00:05:26.473 user 0m0.280s 00:05:26.473 sys 0m0.384s 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.473 08:47:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:26.473 ************************************ 00:05:26.473 END TEST per_node_1G_alloc 00:05:26.473 ************************************ 00:05:26.473 08:47:33 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:26.473 08:47:33 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.473 08:47:33 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.473 08:47:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:26.473 ************************************ 00:05:26.473 START TEST even_2G_alloc 00:05:26.473 ************************************ 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:26.473 08:47:33 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@9 -- # [[ output == output ]] 00:05:26.473 08:47:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.048 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.048 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.048 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7913796 kB' 'MemAvailable: 9491764 kB' 'Buffers: 2436 kB' 'Cached: 1792272 kB' 'SwapCached: 0 kB' 'Active: 452564 kB' 'Inactive: 1464192 kB' 'Active(anon): 132512 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464192 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123360 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137288 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75516 kB' 'KernelStack: 6496 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.048 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.049 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:27.049 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical common.sh@31 IFS=': ' / read -r var val _ and common.sh@32 [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue records repeated for each remaining /proc/meminfo field, Zswap through HardwareCorrupted ...]
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7913796 kB' 'MemAvailable: 9491764 kB' 'Buffers: 2436 kB' 'Cached: 1792272 kB' 'SwapCached: 0 kB' 'Active: 452256 kB' 'Inactive: 1464192 kB' 'Active(anon): 132204 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464192 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123360 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137268 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75496 kB' 'KernelStack: 6448 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.050 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical common.sh@31 IFS=': ' / read -r var val _ and common.sh@32 [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue records repeated for each remaining /proc/meminfo field, MemFree through HugePages_Rsvd ...]
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7914264 kB' 'MemAvailable: 9492236 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 452040 kB' 'Inactive: 1464196 kB' 'Active(anon): 131988 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123184 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137256 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75484 kB' 'KernelStack: 6480 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:27.052 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical common.sh@31 IFS=': ' / read -r var val _ and common.sh@32 [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue records repeated for MemFree through Inactive(anon) ...]
00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:27.053 08:47:34
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.053 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.053 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.054 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:27.055 nr_hugepages=1024 00:05:27.055 resv_hugepages=0 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:27.055 surplus_hugepages=0 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:27.055 anon_hugepages=0 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:27.055 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7914264 kB' 'MemAvailable: 9492236 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 452076 kB' 'Inactive: 1464196 kB' 'Active(anon): 132024 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123220 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137256 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75484 kB' 'KernelStack: 6496 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.055 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.055 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.316 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.316 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.317 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 
08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 
08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:27.318 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 
00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7914264 kB' 'MemUsed: 4327704 kB' 'SwapCached: 0 kB' 'Active: 452240 kB' 'Inactive: 1464196 kB' 'Active(anon): 132188 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1794712 kB' 'Mapped: 48620 kB' 'AnonPages: 123328 kB' 'Shmem: 10464 kB' 'KernelStack: 6464 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61772 kB' 'Slab: 137256 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.319 
08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.319 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.319 08:47:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.320 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.321 08:47:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:27.321 node0=1024 expecting 1024 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:27.321 00:05:27.321 real 0m0.673s 00:05:27.321 user 0m0.335s 00:05:27.321 sys 0m0.380s 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:05:27.321 08:47:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:27.321 ************************************ 00:05:27.321 END TEST even_2G_alloc 00:05:27.321 ************************************ 00:05:27.321 08:47:34 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:27.321 08:47:34 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.321 08:47:34 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.321 08:47:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:27.321 ************************************ 00:05:27.321 START TEST odd_alloc 00:05:27.321 ************************************ 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:27.321 
08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:27.321 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:27.322 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:27.322 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:27.322 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:27.322 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.322 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.896 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.896 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.896 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@92 -- # local surp 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7915436 kB' 'MemAvailable: 9493408 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 452592 kB' 'Inactive: 1464196 kB' 'Active(anon): 132540 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123388 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137276 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75504 kB' 'KernelStack: 6496 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.896 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.897 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7915436 kB' 'MemAvailable: 9493408 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 452336 kB' 'Inactive: 1464196 kB' 'Active(anon): 132284 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123436 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137260 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75488 kB' 'KernelStack: 6480 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 
08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.897 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7920824 kB' 'MemAvailable: 9498796 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 452272 kB' 'Inactive: 1464196 kB' 'Active(anon): 132220 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123328 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137260 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75488 kB' 'KernelStack: 6464 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.898 
08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.898 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:27.899 nr_hugepages=1025 00:05:27.899 resv_hugepages=0 00:05:27.899 surplus_hugepages=0 00:05:27.899 anon_hugepages=0 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7920348 kB' 'MemAvailable: 9498320 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 452268 kB' 'Inactive: 1464196 kB' 'Active(anon): 132216 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123328 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137260 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75488 kB' 'KernelStack: 6464 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 
0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.899 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 
08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 
08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 
00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.900 
08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7920348 kB' 'MemUsed: 4321620 kB' 'SwapCached: 0 kB' 'Active: 452276 kB' 'Inactive: 1464196 kB' 'Active(anon): 132224 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1794712 kB' 'Mapped: 48620 kB' 'AnonPages: 123328 kB' 'Shmem: 10464 kB' 'KernelStack: 6464 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61772 kB' 'Slab: 137260 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.900 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 
00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:27.901 node0=1025 expecting 1025 00:05:27.901 ************************************ 00:05:27.901 END TEST odd_alloc 00:05:27.901 ************************************ 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:27.901 00:05:27.901 real 0m0.725s 00:05:27.901 user 0m0.344s 00:05:27.901 sys 0m0.405s 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.901 08:47:35 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:28.161 08:47:35 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:28.161 08:47:35 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.161 08:47:35 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.161 08:47:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:28.161 ************************************ 00:05:28.161 START TEST custom_alloc 00:05:28.161 ************************************ 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@170 -- # local nodes_hp 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 
-- # : 0 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.161 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:28.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.686 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:28.686 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 
00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8974800 kB' 'MemAvailable: 10552772 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 452608 kB' 'Inactive: 1464196 kB' 'Active(anon): 132556 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123408 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137272 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75500 kB' 'KernelStack: 6496 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 
512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.686 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.687 
08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8974800 kB' 'MemAvailable: 10552772 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 452292 kB' 'Inactive: 1464196 kB' 'Active(anon): 132240 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123340 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137256 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75484 kB' 'KernelStack: 6464 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 
08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.687 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 
08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 
08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.688 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.688 
08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8974800 kB' 'MemAvailable: 10552772 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 452284 kB' 'Inactive: 1464196 kB' 'Active(anon): 132232 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123340 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137256 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75484 kB' 'KernelStack: 6464 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.689 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:28.690 nr_hugepages=512 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:28.690 resv_hugepages=0 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:28.690 surplus_hugepages=0 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:28.690 anon_hugepages=0 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8974800 kB' 'MemAvailable: 10552772 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 452280 kB' 'Inactive: 1464196 kB' 'Active(anon): 132228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123340 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 61772 kB' 'Slab: 137256 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75484 kB' 'KernelStack: 6464 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.690 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8974800 kB' 'MemUsed: 3267168 kB' 'SwapCached: 0 kB' 'Active: 452284 kB' 'Inactive: 1464196 kB' 'Active(anon): 132232 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1794712 kB' 'Mapped: 48620 kB' 'AnonPages: 123340 kB' 'Shmem: 10464 kB' 'KernelStack: 6464 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61772 kB' 'Slab: 137256 kB' 'SReclaimable: 61772 kB' 'SUnreclaim: 75484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.691 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.692 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.692 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 
08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 08:47:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:28.951 node0=512 expecting 512 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:28.951 00:05:28.951 real 0m0.734s 00:05:28.951 user 0m0.363s 00:05:28.951 sys 0m0.388s 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.951 08:47:35 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:28.951 ************************************ 00:05:28.951 END TEST custom_alloc 00:05:28.951 ************************************ 00:05:28.951 08:47:35 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:28.951 08:47:35 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.951 08:47:35 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.951 08:47:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:28.951 ************************************ 00:05:28.951 START TEST no_shrink_alloc 00:05:28.951 ************************************ 00:05:28.951 08:47:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- 
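The no_shrink_alloc test starting here calls get_test_nr_hugepages with size=2097152 and arrives at nr_hugepages=1024, i.e. it divides the requested kB total by the default hugepage size (2048 kB on this machine). A minimal sketch of that derivation, with illustrative names rather than the exact setup/hugepages.sh code:

```shell
#!/usr/bin/env bash
# Sketch of the size -> nr_hugepages step traced above.
# hugepages_for_size is a hypothetical helper, not part of SPDK.
hugepages_for_size() {
    local size_kb=$1          # requested total, in kB (e.g. 2097152 = 2 GiB)
    local hp_kb=${2:-2048}    # default hugepage size, in kB
    # Mirrors the "(( size >= default_hugepages ))" guard in the trace.
    (( size_kb >= hp_kb )) || return 1
    echo $(( size_kb / hp_kb ))
}

hugepages_for_size 2097152 2048   # -> 1024, matching nr_hugepages=1024 above
```

In the real script the per-node counts (`nodes_test[_no_nodes]=1024`) are then filled from this total, one entry per requested node ID.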
# no_shrink_alloc 00:05:28.951 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:28.951 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:28.951 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:28.951 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:28.952 08:47:35 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.952 08:47:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.211 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.474 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:29.474 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.474 08:47:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927780 kB' 'MemAvailable: 9505728 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 447712 kB' 'Inactive: 1464196 kB' 'Active(anon): 127660 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118580 kB' 'Mapped: 47988 kB' 'Shmem: 10464 kB' 'KReclaimable: 61728 kB' 'Slab: 136964 kB' 'SReclaimable: 61728 kB' 'SUnreclaim: 75236 kB' 'KernelStack: 6384 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.474 
08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.474 08:47:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.474 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical setup/common.sh@31-@32 trace elided: the IFS=': ' / read -r var val _ / continue cycle repeats for every non-matching /proc/meminfo field from Inactive(anon) through HardwareCorrupted ...] 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.475 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.476 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927892 kB' 'MemAvailable: 9505840 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 447156 kB' 'Inactive: 1464196 kB' 'Active(anon): 127104 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118496 kB' 'Mapped: 47884 kB' 'Shmem: 10464 kB' 'KReclaimable: 61728 kB' 'Slab: 136960 kB' 'SReclaimable: 61728 kB' 'SUnreclaim: 75232 kB' 'KernelStack: 6352 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 
5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:29.476 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.476 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical setup/common.sh@31-@32 trace elided: the IFS=': ' / read -r var val _ / continue cycle repeats for every non-matching field from MemTotal through HugePages_Rsvd ...] 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.477 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927892 kB' 'MemAvailable: 9505840 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 447232 kB' 'Inactive: 1464196 kB' 'Active(anon): 127180 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118592 kB' 'Mapped: 47884 kB' 'Shmem: 10464 kB' 'KReclaimable: 61728 kB' 'Slab: 136960 kB' 'SReclaimable: 61728 kB' 'SUnreclaim: 75232 kB' 'KernelStack: 6368 kB' 
'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.478 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.479 08:47:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:29.479 nr_hugepages=1024 00:05:29.479 resv_hugepages=0 00:05:29.479 surplus_hugepages=0 00:05:29.479 anon_hugepages=0 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:29.479 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927392 kB' 'MemAvailable: 9505340 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 447236 kB' 'Inactive: 1464196 kB' 'Active(anon): 127184 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118592 kB' 'Mapped: 47884 kB' 'Shmem: 10464 kB' 'KReclaimable: 61728 kB' 'Slab: 136960 kB' 'SReclaimable: 61728 kB' 'SUnreclaim: 75232 kB' 'KernelStack: 6368 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.480 08:47:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.480 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 
08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 
08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.481 08:47:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:29.481 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.482 08:47:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927392 kB' 'MemUsed: 4314576 kB' 'SwapCached: 0 kB' 'Active: 447160 kB' 'Inactive: 1464196 kB' 'Active(anon): 127108 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1794712 kB' 'Mapped: 47884 kB' 'AnonPages: 118496 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61728 kB' 'Slab: 136960 kB' 'SReclaimable: 61728 kB' 'SUnreclaim: 75232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 
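The trace above is `setup/common.sh` walking a meminfo file line by line: it sets `IFS=': '`, splits each line into a field name and value with `read -r var val _`, and `continue`s past every field until the requested one matches. A minimal standalone sketch of that parsing pattern follows; the helper name `get_meminfo_field` is illustrative (the real helper in the log is `get_meminfo`), and the field names and paths are taken from the trace:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-parsing loop traced above: split each line on
# ': ' into a field name and a value, and print the value of one field.
get_meminfo_field() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested one is found
        # (this is the long run of "continue" lines in the trace).
        [[ $var == "$get" ]] || continue
        echo "${val%% *}"   # drop a trailing "kB" unit if present
        return 0
    done < "$mem_f"
    return 1
}

# Usage: on the CI box above this printed 1024.
get_meminfo_field HugePages_Total || true
```

Note that `HugePages_*` lines carry no `kB` suffix, which is why the trace compares raw field names and echoes the value as-is once `HugePages_Total` finally matches.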
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.482 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:29.483 node0=1024 expecting 1024 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:29.483 08:47:36 
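The `echo 'node0=1024 expecting 1024'` step above is `setup/hugepages.sh` comparing each NUMA node's actual `HugePages_Total` against the expected count. A hedged sketch of that per-node check, assuming the sysfs paths shown in the trace (`/sys/devices/system/node/node*/meminfo`); the `expected` value and the `awk` extraction are illustrative, since the real script accumulates surplus and reserved pages before comparing:

```shell
#!/usr/bin/env bash
# Sketch of the per-node verification traced above: read HugePages_Total
# from each NUMA node's meminfo and compare it with an expected count.
shopt -s nullglob   # loop runs zero times on systems without node dirs
expected=1024       # illustrative; the log's run expected 1024 on node0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Node meminfo lines look like "Node 0 HugePages_Total:  1024",
    # so the count is the last field on the matching line.
    total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    echo "node${node}=${total} expecting ${expected}"
    [[ $total == "$expected" ]] || echo "node${node}: mismatch" >&2
done
```

The trailing `[[ 1024 == \1\0\2\4 ]]` in the trace is the same comparison: bash pattern matching with every character escaped, so it behaves as a literal string equality test.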
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:29.483 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:29.742 08:47:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:30.002 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:30.002 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:30.002 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:30.002 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@19 -- # local var val
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927488 kB' 'MemAvailable: 9505436 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 447712 kB' 'Inactive: 1464196 kB' 'Active(anon): 127660 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 118772 kB' 'Mapped: 48020 kB' 'Shmem: 10464 kB' 'KReclaimable: 61728 kB' 'Slab: 136956 kB' 'SReclaimable: 61728 kB' 'SUnreclaim: 75228 kB' 'KernelStack: 6360 kB' 'PageTables: 3608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:30.002 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:30.268 08:47:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927236 kB' 'MemAvailable: 9505184 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 447444 kB' 'Inactive: 1464196 kB' 'Active(anon): 127392 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 118500 kB' 'Mapped: 47884 kB' 'Shmem: 10464 kB' 'KReclaimable: 61728 kB' 'Slab: 136956 kB' 'SReclaimable: 61728 kB' 'SUnreclaim: 75228 kB' 'KernelStack: 6352 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:30.268 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:30.269 08:47:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.269 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927236 kB' 'MemAvailable: 9505184 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 447180 kB' 'Inactive: 1464196 kB' 'Active(anon): 127128 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 118500 kB' 'Mapped: 47884 kB' 'Shmem: 10464 kB' 'KReclaimable: 61728 kB' 'Slab: 136956 kB' 'SReclaimable: 61728 kB' 'SUnreclaim: 75228 kB' 'KernelStack: 6352 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.270 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.271 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.272 08:47:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:30.272 nr_hugepages=1024 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:30.272 resv_hugepages=0 00:05:30.272 surplus_hugepages=0 00:05:30.272 anon_hugepages=0 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927488 kB' 'MemAvailable: 9505436 kB' 'Buffers: 2436 kB' 'Cached: 1792276 kB' 'SwapCached: 0 kB' 'Active: 447540 kB' 'Inactive: 1464196 kB' 'Active(anon): 127488 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 118644 kB' 'Mapped: 47884 kB' 'Shmem: 10464 kB' 'KReclaimable: 61728 kB' 'Slab: 136956 kB' 'SReclaimable: 61728 kB' 'SUnreclaim: 75228 kB' 'KernelStack: 6336 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.272 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 
08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.273 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 
08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.274 08:47:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927792 kB' 'MemUsed: 4314176 kB' 'SwapCached: 0 kB' 'Active: 447428 kB' 'Inactive: 1464196 kB' 'Active(anon): 127376 kB' 'Inactive(anon): 0 kB' 'Active(file): 320052 kB' 'Inactive(file): 1464196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1794712 kB' 'Mapped: 47884 kB' 'AnonPages: 118516 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61728 kB' 'Slab: 136956 kB' 'SReclaimable: 61728 kB' 'SUnreclaim: 75228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.274 08:47:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.274 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.275 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.276 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.276 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.276 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:30.276 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:30.276 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:30.276 node0=1024 expecting 1024 00:05:30.276 ************************************ 00:05:30.276 END TEST no_shrink_alloc 00:05:30.276 ************************************ 00:05:30.276 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:30.276 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:30.276 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:30.276 08:47:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 
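Editor's note: the long run of `IFS=': '` / `read -r var val _` / `continue` iterations above is setup/common.sh scanning a meminfo-style listing key by key until it reaches `HugePages_Surp` (which is why every other key hits `continue`). A minimal self-contained sketch of that scan, using a hypothetical `get_meminfo_key` helper rather than the real setup/common.sh function, fed a canned listing so it runs anywhere:

```shell
# Scan "Key: value [unit]" lines and echo the value of the requested
# key. Mirrors the IFS=': ' read -r var val _ loop in the trace above;
# hypothetical helper, not the real setup/common.sh implementation.
get_meminfo_key() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo_key HugePages_Surp <<'EOF'
MemFree:         1024 kB
HugePages_Total:    1024
HugePages_Free:      512
HugePages_Surp:        0
EOF
```

On a real system the same loop reads `/proc/meminfo` or a per-node `meminfo` under `/sys/devices/system/node`, which is where keys such as `MemFree`, `Active(anon)`, and `HugePages_Free` in the trace come from.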
00:05:30.276 00:05:30.276 real 0m1.438s 00:05:30.276 user 0m0.644s 00:05:30.276 sys 0m0.833s 00:05:30.276 08:47:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.276 08:47:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:30.276 08:47:37 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:30.276 08:47:37 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:30.276 08:47:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:30.276 08:47:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:30.276 08:47:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:30.276 08:47:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:30.276 08:47:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:30.276 08:47:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:30.276 08:47:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:30.276 00:05:30.276 real 0m5.752s 00:05:30.276 user 0m2.600s 00:05:30.276 sys 0m3.289s 00:05:30.276 08:47:37 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.276 08:47:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:30.276 ************************************ 00:05:30.276 END TEST hugepages 00:05:30.276 ************************************ 00:05:30.536 08:47:37 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:30.536 08:47:37 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.536 08:47:37 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.536 08:47:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:30.536 
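Editor's note: `clear_hp` in the trace above loops over `/sys/devices/system/node/node*/hugepages/hugepages-*` and echoes 0 into each pool's `nr_hugepages`. A dry-run sketch of that walk — the sysfs root is a parameter and the writes are only printed, so it needs no root privileges; this is an illustration, not the real setup/hugepages.sh function:

```shell
# Walk every NUMA node's hugepage pools under $root and print the
# nr_hugepages files that clear_hp would zero out. Dry run only:
# nothing is written.
clear_hp_dry_run() {
    local root=$1 node hp
    for node in "$root"/devices/system/node/node*; do
        [[ -d $node ]] || continue
        for hp in "$node"/hugepages/hugepages-*; do
            [[ -d $hp ]] || continue
            echo "0 -> $hp/nr_hugepages"
        done
    done
}
```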
************************************ 00:05:30.536 START TEST driver 00:05:30.536 ************************************ 00:05:30.536 08:47:37 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:30.536 * Looking for test storage... 00:05:30.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:30.536 08:47:37 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:30.536 08:47:37 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:30.536 08:47:37 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.492 08:47:38 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:31.492 08:47:38 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.492 08:47:38 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.492 08:47:38 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:31.492 ************************************ 00:05:31.492 START TEST guess_driver 00:05:31.492 ************************************ 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:31.492 08:47:38 
setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:31.492 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:31.492 Looking for driver=uio_pci_generic 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:31.492 08:47:38 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:31.493 08:47:38 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.493 08:47:38 
setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:32.090 08:47:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:32.090 08:47:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:32.090 08:47:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.090 08:47:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:32.090 08:47:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:32.090 08:47:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.350 08:47:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:32.350 08:47:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:32.350 08:47:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.350 08:47:39 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:32.350 08:47:39 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:32.350 08:47:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:32.350 08:47:39 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.919 00:05:32.919 real 0m1.709s 00:05:32.919 user 0m0.593s 00:05:32.919 sys 0m1.142s 00:05:32.919 08:47:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.919 08:47:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:32.919 ************************************ 00:05:32.919 END TEST guess_driver 00:05:32.919 ************************************ 00:05:32.919 ************************************ 00:05:32.919 END TEST driver 00:05:32.919 
************************************ 00:05:32.919 00:05:32.919 real 0m2.603s 00:05:32.919 user 0m0.911s 00:05:32.919 sys 0m1.811s 00:05:32.919 08:47:40 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.919 08:47:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:33.179 08:47:40 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:33.179 08:47:40 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.179 08:47:40 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.179 08:47:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:33.179 ************************************ 00:05:33.179 START TEST devices 00:05:33.179 ************************************ 00:05:33.179 08:47:40 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:33.179 * Looking for test storage... 00:05:33.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:33.179 08:47:40 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:33.179 08:47:40 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:33.179 08:47:40 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:33.179 08:47:40 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:34.117 08:47:41 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned 
nvme0n1 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:34.117 08:47:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:34.117 08:47:41 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:34.117 08:47:41 
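Editor's note: the `is_block_zoned` checks traced above treat a device as zoned when its `queue/zoned` sysfs attribute exists and reads something other than `none` (hence the `[[ none != none ]]` outcomes for each namespace). A sketch of that check with the sysfs root parameterized so it can be exercised against a fake tree; the real function in autotest_common.sh reads `/sys/block` directly:

```shell
# True when $device under $root/block has a queue/zoned attribute whose
# contents are not "none". Root is parameterized for testing only.
is_block_zoned() {
    local root=$1 device=$2
    local attr=$root/block/$device/queue/zoned
    [[ -e $attr ]] && [[ $(<"$attr") != none ]]
}
```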
setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:34.117 08:47:41 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:34.117 08:47:41 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:34.117 08:47:41 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:34.117 08:47:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:34.117 08:47:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:34.117 08:47:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:34.117 08:47:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:34.117 08:47:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:34.117 08:47:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:34.117 08:47:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:34.117 08:47:41 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:34.117 No valid GPT data, bailing 00:05:34.117 08:47:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:34.117 08:47:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:34.118 08:47:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:34.118 08:47:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:34.118 08:47:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:34.118 08:47:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:34.118 08:47:41 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:34.118 08:47:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:34.118 08:47:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:34.118 08:47:41 setup.sh.devices -- 
setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:34.118 08:47:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:34.118 08:47:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:34.118 08:47:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:34.118 08:47:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:34.118 08:47:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:34.118 08:47:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:34.118 08:47:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:34.118 08:47:41 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:34.118 No valid GPT data, bailing 00:05:34.118 08:47:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:34.118 08:47:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:34.118 08:47:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:34.118 08:47:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:34.118 08:47:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:34.118 08:47:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:34.118 08:47:41 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:34.377 08:47:41 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:34.377 08:47:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:34.377 08:47:41 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:34.377 No valid GPT data, bailing 00:05:34.377 08:47:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:34.377 08:47:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:34.377 08:47:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:34.377 08:47:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:34.377 08:47:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:34.377 08:47:41 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:34.377 08:47:41 
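Editor's note: each candidate disk in the scan above is sized (`echo 4294967296`, and `5368709120` for nvme1n1 below) and compared against `min_disk_size=3221225472` (3 GiB, from setup/devices.sh@198). A minimal sketch of that filter taking a 512-byte sector count as input; the `sectors_to_bytes` helper is hypothetical, and the real `sec_size_to_bytes` in setup/common.sh derives the size differently:

```shell
# Keep only disks whose capacity meets the 3 GiB floor used by
# devices.sh. Input is a 512-byte sector count.
min_disk_size=3221225472   # 3 GiB, as in setup/devices.sh@198

sectors_to_bytes() {
    echo $(( $1 * 512 ))
}

disk_usable() {
    (( $(sectors_to_bytes "$1") >= min_disk_size ))
}

disk_usable 8388608 && echo "4 GiB disk passes"    # 8388608 * 512 = 4294967296
disk_usable 2097152 || echo "1 GiB disk filtered"
```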
setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:34.377 08:47:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:34.377 08:47:41 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:34.377 No valid GPT data, bailing 00:05:34.377 08:47:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:34.377 08:47:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:34.377 08:47:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:34.377 08:47:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:34.377 08:47:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:34.377 08:47:41 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:34.377 08:47:41 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:34.378 08:47:41 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.378 08:47:41 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.378 08:47:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:34.378 ************************************ 00:05:34.378 START TEST nvme_mount 00:05:34.378 ************************************ 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@1125 -- # nvme_mount 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:34.378 08:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:35.755 Creating new GPT entries in memory. 00:05:35.755 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:35.755 other utilities. 00:05:35.755 08:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:35.755 08:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:35.755 08:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:35.755 08:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:35.755 08:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:36.705 Creating new GPT entries in memory. 00:05:36.705 The operation has completed successfully. 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57344 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding 
PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:36.705 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.964 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:36.964 08:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.964 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:36.964 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.226 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.226 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:37.226 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.226 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:37.226 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:37.226 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:37.226 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.226 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.226 
08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:37.226 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:37.226 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:37.226 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:37.226 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:37.486 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:37.486 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:37.486 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:37.486 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:37.486 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:37.486 08:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:37.486 08:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.486 08:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:37.486 08:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:37.487 08:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.487 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:37.487 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:37.487 
08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:37.487 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.487 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:37.487 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:37.487 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:37.487 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:37.487 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:37.487 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.487 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:37.487 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:37.487 08:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.487 08:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:37.744 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:37.744 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:37.744 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:37.744 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.744 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 
00:05:37.744 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.003 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:38.003 08:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.003 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:38.003 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:38.260 08:47:45 
setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.260 08:47:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:38.582 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:38.582 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:38.582 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:38.582 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.582 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:38.582 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.858 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:38.858 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.858 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:38.858 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.858 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:38.858 
08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:38.858 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:38.858 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:38.858 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:38.858 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:38.858 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:38.858 08:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:38.858 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:38.858 00:05:38.858 real 0m4.477s 00:05:38.858 user 0m0.774s 00:05:38.858 sys 0m1.439s 00:05:38.858 08:47:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.858 08:47:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:38.858 ************************************ 00:05:38.858 END TEST nvme_mount 00:05:38.858 ************************************ 00:05:38.858 08:47:45 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:38.858 08:47:45 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.858 08:47:45 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.858 08:47:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:38.858 ************************************ 00:05:38.858 START TEST dm_mount 00:05:38.858 ************************************ 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:38.858 
08:47:45 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:38.858 08:47:45 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:40.233 Creating new GPT entries in memory. 00:05:40.233 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:05:40.233 other utilities. 00:05:40.233 08:47:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:40.233 08:47:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.233 08:47:46 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:40.233 08:47:46 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:40.233 08:47:46 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:41.166 Creating new GPT entries in memory. 00:05:41.166 The operation has completed successfully. 00:05:41.166 08:47:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:41.166 08:47:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.166 08:47:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:41.166 08:47:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:41.166 08:47:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:42.103 The operation has completed successfully. 
00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57787 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test 
mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 
00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.103 08:47:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.361 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.361 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:42.361 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:42.361 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.361 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.361 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.619 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.619 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.619 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.619 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:42.878 08:47:49 
setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.878 08:47:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:43.136 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.136 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:43.136 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:43.136 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.136 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.136 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:43.395 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- 
setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:43.395 00:05:43.395 real 0m4.561s 00:05:43.395 user 0m0.576s 00:05:43.395 sys 0m0.951s 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.395 08:47:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:43.395 ************************************ 00:05:43.395 END TEST dm_mount 00:05:43.395 ************************************ 00:05:43.654 08:47:50 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:43.654 08:47:50 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:43.654 08:47:50 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.654 08:47:50 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:43.654 08:47:50 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:43.654 08:47:50 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:43.654 08:47:50 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:43.918 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:43.918 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:43.918 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:43.918 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:43.918 08:47:50 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:43.918 08:47:50 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:43.918 08:47:50 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:43.918 08:47:50 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:43.918 
08:47:50 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:43.918 08:47:50 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:43.918 08:47:50 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:43.918 00:05:43.918 real 0m10.770s 00:05:43.918 user 0m2.018s 00:05:43.918 sys 0m3.189s 00:05:43.918 08:47:50 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.918 08:47:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:43.918 ************************************ 00:05:43.918 END TEST devices 00:05:43.918 ************************************ 00:05:43.918 ************************************ 00:05:43.918 END TEST setup.sh 00:05:43.918 ************************************ 00:05:43.918 00:05:43.918 real 0m24.825s 00:05:43.918 user 0m7.795s 00:05:43.918 sys 0m11.768s 00:05:43.918 08:47:50 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.918 08:47:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:43.918 08:47:50 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:44.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.868 Hugepages 00:05:44.868 node hugesize free / total 00:05:44.868 node0 1048576kB 0 / 0 00:05:44.868 node0 2048kB 2048 / 2048 00:05:44.868 00:05:44.868 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:44.868 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:44.868 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:44.868 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:44.868 08:47:51 -- spdk/autotest.sh@130 -- # uname -s 00:05:45.128 08:47:51 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:45.128 08:47:51 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:45.128 08:47:51 -- common/autotest_common.sh@1531 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:45.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:45.956 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:45.956 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:45.956 08:47:52 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:46.892 08:47:53 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:46.892 08:47:53 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:46.892 08:47:53 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:46.892 08:47:53 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:46.892 08:47:53 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:46.892 08:47:53 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:46.892 08:47:53 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:46.892 08:47:53 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:46.892 08:47:53 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:47.151 08:47:54 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:47.151 08:47:54 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:47.151 08:47:54 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:47.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.669 Waiting for block devices as requested 00:05:47.669 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:47.669 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:47.669 08:47:54 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:47.669 08:47:54 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:47.669 08:47:54 -- common/autotest_common.sh@1502 -- # readlink -f 
/sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:47.669 08:47:54 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:47.669 08:47:54 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:47.669 08:47:54 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:47.669 08:47:54 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:47.669 08:47:54 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:47.669 08:47:54 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:47.669 08:47:54 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:47.669 08:47:54 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:47.669 08:47:54 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:47.669 08:47:54 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:47.669 08:47:54 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:47.669 08:47:54 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:47.669 08:47:54 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:47.669 08:47:54 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:47.669 08:47:54 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:47.669 08:47:54 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:47.669 08:47:54 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:47.669 08:47:54 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:47.669 08:47:54 -- common/autotest_common.sh@1557 -- # continue 00:05:47.669 08:47:54 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:47.669 08:47:54 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:47.928 08:47:54 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:47.929 08:47:54 -- 
common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:47.929 08:47:54 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:47.929 08:47:54 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:47.929 08:47:54 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:47.929 08:47:54 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:47.929 08:47:54 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:47.929 08:47:54 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:47.929 08:47:54 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:47.929 08:47:54 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:47.929 08:47:54 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:47.929 08:47:54 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:47.929 08:47:54 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:47.929 08:47:54 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:47.929 08:47:54 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:47.929 08:47:54 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:47.929 08:47:54 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:47.929 08:47:54 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:47.929 08:47:54 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:47.929 08:47:54 -- common/autotest_common.sh@1557 -- # continue 00:05:47.929 08:47:54 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:47.929 08:47:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:47.929 08:47:54 -- common/autotest_common.sh@10 -- # set +x 00:05:47.929 08:47:54 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:47.929 08:47:54 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.929 08:47:54 -- common/autotest_common.sh@10 -- 
# set +x 00:05:47.929 08:47:54 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:48.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:48.867 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:48.867 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:48.867 08:47:55 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:48.867 08:47:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.867 08:47:55 -- common/autotest_common.sh@10 -- # set +x 00:05:48.867 08:47:55 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:48.867 08:47:55 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:48.867 08:47:55 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:48.867 08:47:55 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:48.867 08:47:55 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:48.867 08:47:55 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:48.867 08:47:55 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:48.867 08:47:55 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:48.867 08:47:55 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:48.867 08:47:55 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:48.868 08:47:55 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:49.127 08:47:56 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:49.127 08:47:56 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:49.127 08:47:56 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:49.127 08:47:56 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:49.127 08:47:56 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:49.127 08:47:56 -- 
common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:49.127 08:47:56 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:49.127 08:47:56 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:49.127 08:47:56 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:49.127 08:47:56 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:49.127 08:47:56 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:49.127 08:47:56 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:49.127 08:47:56 -- common/autotest_common.sh@1593 -- # return 0 00:05:49.127 08:47:56 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:49.127 08:47:56 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:49.127 08:47:56 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:49.127 08:47:56 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:49.127 08:47:56 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:49.127 08:47:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.127 08:47:56 -- common/autotest_common.sh@10 -- # set +x 00:05:49.127 08:47:56 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:49.127 08:47:56 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:49.127 08:47:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.127 08:47:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.127 08:47:56 -- common/autotest_common.sh@10 -- # set +x 00:05:49.127 ************************************ 00:05:49.127 START TEST env 00:05:49.127 ************************************ 00:05:49.127 08:47:56 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:49.127 * Looking for test storage... 
00:05:49.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:49.127 08:47:56 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:49.127 08:47:56 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.127 08:47:56 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.127 08:47:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.127 ************************************ 00:05:49.127 START TEST env_memory 00:05:49.127 ************************************ 00:05:49.127 08:47:56 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:49.127 00:05:49.127 00:05:49.127 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.127 http://cunit.sourceforge.net/ 00:05:49.127 00:05:49.127 00:05:49.127 Suite: memory 00:05:49.386 Test: alloc and free memory map ...[2024-07-25 08:47:56.257994] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:49.386 passed 00:05:49.386 Test: mem map translation ...[2024-07-25 08:47:56.299927] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:49.386 [2024-07-25 08:47:56.300059] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:49.386 [2024-07-25 08:47:56.300199] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:49.386 [2024-07-25 08:47:56.300332] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:49.386 passed 00:05:49.386 Test: mem map registration ...[2024-07-25 08:47:56.366232] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:49.386 [2024-07-25 08:47:56.366343] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:49.386 passed 00:05:49.386 Test: mem map adjacent registrations ...passed 00:05:49.386 00:05:49.386 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.386 suites 1 1 n/a 0 0 00:05:49.386 tests 4 4 4 0 0 00:05:49.386 asserts 152 152 152 0 n/a 00:05:49.386 00:05:49.386 Elapsed time = 0.264 seconds 00:05:49.386 00:05:49.386 real 0m0.309s 00:05:49.386 user 0m0.266s 00:05:49.386 sys 0m0.031s 00:05:49.386 08:47:56 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.386 08:47:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:49.386 ************************************ 00:05:49.386 END TEST env_memory 00:05:49.386 ************************************ 00:05:49.645 08:47:56 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:49.645 08:47:56 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.645 08:47:56 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.645 08:47:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.645 ************************************ 00:05:49.645 START TEST env_vtophys 00:05:49.645 ************************************ 00:05:49.645 08:47:56 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:49.645 EAL: lib.eal log level changed from notice to debug 00:05:49.645 EAL: Detected lcore 0 as core 0 on socket 0 00:05:49.645 EAL: Detected lcore 1 as core 0 on socket 0 00:05:49.645 EAL: Detected lcore 2 as core 0 on socket 0 00:05:49.645 EAL: Detected lcore 3 as core 0 on socket 0 00:05:49.645 EAL: Detected lcore 4 as 
core 0 on socket 0 00:05:49.645 EAL: Detected lcore 5 as core 0 on socket 0 00:05:49.645 EAL: Detected lcore 6 as core 0 on socket 0 00:05:49.645 EAL: Detected lcore 7 as core 0 on socket 0 00:05:49.645 EAL: Detected lcore 8 as core 0 on socket 0 00:05:49.645 EAL: Detected lcore 9 as core 0 on socket 0 00:05:49.645 EAL: Maximum logical cores by configuration: 128 00:05:49.645 EAL: Detected CPU lcores: 10 00:05:49.645 EAL: Detected NUMA nodes: 1 00:05:49.645 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:49.645 EAL: Detected shared linkage of DPDK 00:05:49.645 EAL: No shared files mode enabled, IPC will be disabled 00:05:49.645 EAL: Selected IOVA mode 'PA' 00:05:49.645 EAL: Probing VFIO support... 00:05:49.645 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:49.645 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:49.645 EAL: Ask a virtual area of 0x2e000 bytes 00:05:49.645 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:49.645 EAL: Setting up physically contiguous memory... 
00:05:49.645 EAL: Setting maximum number of open files to 524288 00:05:49.645 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:49.645 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:49.645 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.646 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:49.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.646 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.646 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:49.646 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:49.646 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.646 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:49.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.646 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.646 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:49.646 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:49.646 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.646 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:49.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.646 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.646 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:49.646 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:49.646 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.646 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:49.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.646 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.646 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:49.646 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:49.646 EAL: Hugepages will be freed exactly as allocated. 
00:05:49.646 EAL: No shared files mode enabled, IPC is disabled 00:05:49.646 EAL: No shared files mode enabled, IPC is disabled 00:05:49.646 EAL: TSC frequency is ~2290000 KHz 00:05:49.646 EAL: Main lcore 0 is ready (tid=7f1a75a19a40;cpuset=[0]) 00:05:49.646 EAL: Trying to obtain current memory policy. 00:05:49.646 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.646 EAL: Restoring previous memory policy: 0 00:05:49.646 EAL: request: mp_malloc_sync 00:05:49.646 EAL: No shared files mode enabled, IPC is disabled 00:05:49.646 EAL: Heap on socket 0 was expanded by 2MB 00:05:49.646 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:49.646 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:49.646 EAL: Mem event callback 'spdk:(nil)' registered 00:05:49.646 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:49.905 00:05:49.905 00:05:49.905 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.905 http://cunit.sourceforge.net/ 00:05:49.905 00:05:49.905 00:05:49.905 Suite: components_suite 00:05:50.165 Test: vtophys_malloc_test ...passed 00:05:50.165 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:50.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.165 EAL: Restoring previous memory policy: 4 00:05:50.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.165 EAL: request: mp_malloc_sync 00:05:50.165 EAL: No shared files mode enabled, IPC is disabled 00:05:50.165 EAL: Heap on socket 0 was expanded by 4MB 00:05:50.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.165 EAL: request: mp_malloc_sync 00:05:50.165 EAL: No shared files mode enabled, IPC is disabled 00:05:50.165 EAL: Heap on socket 0 was shrunk by 4MB 00:05:50.165 EAL: Trying to obtain current memory policy. 
00:05:50.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.165 EAL: Restoring previous memory policy: 4 00:05:50.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.165 EAL: request: mp_malloc_sync 00:05:50.165 EAL: No shared files mode enabled, IPC is disabled 00:05:50.165 EAL: Heap on socket 0 was expanded by 6MB 00:05:50.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.165 EAL: request: mp_malloc_sync 00:05:50.165 EAL: No shared files mode enabled, IPC is disabled 00:05:50.165 EAL: Heap on socket 0 was shrunk by 6MB 00:05:50.165 EAL: Trying to obtain current memory policy. 00:05:50.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.165 EAL: Restoring previous memory policy: 4 00:05:50.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.165 EAL: request: mp_malloc_sync 00:05:50.165 EAL: No shared files mode enabled, IPC is disabled 00:05:50.165 EAL: Heap on socket 0 was expanded by 10MB 00:05:50.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.165 EAL: request: mp_malloc_sync 00:05:50.165 EAL: No shared files mode enabled, IPC is disabled 00:05:50.165 EAL: Heap on socket 0 was shrunk by 10MB 00:05:50.165 EAL: Trying to obtain current memory policy. 00:05:50.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.165 EAL: Restoring previous memory policy: 4 00:05:50.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.165 EAL: request: mp_malloc_sync 00:05:50.165 EAL: No shared files mode enabled, IPC is disabled 00:05:50.165 EAL: Heap on socket 0 was expanded by 18MB 00:05:50.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.165 EAL: request: mp_malloc_sync 00:05:50.165 EAL: No shared files mode enabled, IPC is disabled 00:05:50.165 EAL: Heap on socket 0 was shrunk by 18MB 00:05:50.165 EAL: Trying to obtain current memory policy. 
00:05:50.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.165 EAL: Restoring previous memory policy: 4 00:05:50.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.165 EAL: request: mp_malloc_sync 00:05:50.165 EAL: No shared files mode enabled, IPC is disabled 00:05:50.165 EAL: Heap on socket 0 was expanded by 34MB 00:05:50.425 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.425 EAL: request: mp_malloc_sync 00:05:50.425 EAL: No shared files mode enabled, IPC is disabled 00:05:50.425 EAL: Heap on socket 0 was shrunk by 34MB 00:05:50.425 EAL: Trying to obtain current memory policy. 00:05:50.425 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.425 EAL: Restoring previous memory policy: 4 00:05:50.425 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.425 EAL: request: mp_malloc_sync 00:05:50.425 EAL: No shared files mode enabled, IPC is disabled 00:05:50.425 EAL: Heap on socket 0 was expanded by 66MB 00:05:50.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.684 EAL: request: mp_malloc_sync 00:05:50.684 EAL: No shared files mode enabled, IPC is disabled 00:05:50.684 EAL: Heap on socket 0 was shrunk by 66MB 00:05:50.684 EAL: Trying to obtain current memory policy. 00:05:50.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.684 EAL: Restoring previous memory policy: 4 00:05:50.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.684 EAL: request: mp_malloc_sync 00:05:50.684 EAL: No shared files mode enabled, IPC is disabled 00:05:50.684 EAL: Heap on socket 0 was expanded by 130MB 00:05:50.944 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.944 EAL: request: mp_malloc_sync 00:05:50.944 EAL: No shared files mode enabled, IPC is disabled 00:05:50.944 EAL: Heap on socket 0 was shrunk by 130MB 00:05:51.203 EAL: Trying to obtain current memory policy. 
00:05:51.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.203 EAL: Restoring previous memory policy: 4 00:05:51.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.203 EAL: request: mp_malloc_sync 00:05:51.203 EAL: No shared files mode enabled, IPC is disabled 00:05:51.203 EAL: Heap on socket 0 was expanded by 258MB 00:05:51.773 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.773 EAL: request: mp_malloc_sync 00:05:51.773 EAL: No shared files mode enabled, IPC is disabled 00:05:51.773 EAL: Heap on socket 0 was shrunk by 258MB 00:05:52.342 EAL: Trying to obtain current memory policy. 00:05:52.342 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.342 EAL: Restoring previous memory policy: 4 00:05:52.342 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.342 EAL: request: mp_malloc_sync 00:05:52.342 EAL: No shared files mode enabled, IPC is disabled 00:05:52.342 EAL: Heap on socket 0 was expanded by 514MB 00:05:53.722 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.722 EAL: request: mp_malloc_sync 00:05:53.722 EAL: No shared files mode enabled, IPC is disabled 00:05:53.722 EAL: Heap on socket 0 was shrunk by 514MB 00:05:54.291 EAL: Trying to obtain current memory policy. 
00:05:54.291 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.550 EAL: Restoring previous memory policy: 4 00:05:54.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.551 EAL: request: mp_malloc_sync 00:05:54.551 EAL: No shared files mode enabled, IPC is disabled 00:05:54.551 EAL: Heap on socket 0 was expanded by 1026MB 00:05:57.089 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.089 EAL: request: mp_malloc_sync 00:05:57.089 EAL: No shared files mode enabled, IPC is disabled 00:05:57.089 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:58.468 passed 00:05:58.468 00:05:58.468 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.468 suites 1 1 n/a 0 0 00:05:58.468 tests 2 2 2 0 0 00:05:58.468 asserts 5131 5131 5131 0 n/a 00:05:58.468 00:05:58.468 Elapsed time = 8.630 seconds 00:05:58.468 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.468 EAL: request: mp_malloc_sync 00:05:58.468 EAL: No shared files mode enabled, IPC is disabled 00:05:58.468 EAL: Heap on socket 0 was shrunk by 2MB 00:05:58.468 EAL: No shared files mode enabled, IPC is disabled 00:05:58.468 EAL: No shared files mode enabled, IPC is disabled 00:05:58.468 EAL: No shared files mode enabled, IPC is disabled 00:05:58.468 00:05:58.468 real 0m8.933s 00:05:58.468 user 0m7.997s 00:05:58.468 sys 0m0.776s 00:05:58.468 ************************************ 00:05:58.468 END TEST env_vtophys 00:05:58.468 ************************************ 00:05:58.468 08:48:05 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.468 08:48:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:58.468 08:48:05 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:58.468 08:48:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.468 08:48:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.468 08:48:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.468 
************************************ 00:05:58.468 START TEST env_pci 00:05:58.468 ************************************ 00:05:58.468 08:48:05 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:58.469 00:05:58.469 00:05:58.469 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.469 http://cunit.sourceforge.net/ 00:05:58.469 00:05:58.469 00:05:58.469 Suite: pci 00:05:58.728 Test: pci_hook ...[2024-07-25 08:48:05.589265] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59076 has claimed it 00:05:58.728 EAL: Cannot find device (10000:00:01.0) 00:05:58.728 passed 00:05:58.728 00:05:58.728 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.728 suites 1 1 n/a 0 0 00:05:58.728 tests 1 1 1 0 0 00:05:58.728 asserts 25 25 25 0 n/a 00:05:58.728 00:05:58.728 Elapsed time = 0.005 seconds 00:05:58.728 EAL: Failed to attach device on primary process 00:05:58.728 00:05:58.728 real 0m0.102s 00:05:58.728 user 0m0.050s 00:05:58.728 sys 0m0.051s 00:05:58.728 08:48:05 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.728 08:48:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:58.728 ************************************ 00:05:58.728 END TEST env_pci 00:05:58.728 ************************************ 00:05:58.728 08:48:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:58.728 08:48:05 env -- env/env.sh@15 -- # uname 00:05:58.728 08:48:05 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:58.728 08:48:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:58.728 08:48:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:58.728 08:48:05 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:58.728 08:48:05 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.728 08:48:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.728 ************************************ 00:05:58.728 START TEST env_dpdk_post_init 00:05:58.728 ************************************ 00:05:58.728 08:48:05 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:58.728 EAL: Detected CPU lcores: 10 00:05:58.728 EAL: Detected NUMA nodes: 1 00:05:58.728 EAL: Detected shared linkage of DPDK 00:05:58.728 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:58.728 EAL: Selected IOVA mode 'PA' 00:05:58.988 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:58.988 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:58.988 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:58.988 Starting DPDK initialization... 00:05:58.988 Starting SPDK post initialization... 00:05:58.988 SPDK NVMe probe 00:05:58.988 Attaching to 0000:00:10.0 00:05:58.988 Attaching to 0000:00:11.0 00:05:58.988 Attached to 0000:00:10.0 00:05:58.988 Attached to 0000:00:11.0 00:05:58.988 Cleaning up... 
00:05:58.988 ************************************ 00:05:58.988 END TEST env_dpdk_post_init 00:05:58.988 ************************************ 00:05:58.988 00:05:58.988 real 0m0.269s 00:05:58.988 user 0m0.086s 00:05:58.988 sys 0m0.084s 00:05:58.988 08:48:05 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.988 08:48:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:58.988 08:48:06 env -- env/env.sh@26 -- # uname 00:05:58.988 08:48:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:58.988 08:48:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:58.988 08:48:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.988 08:48:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.988 08:48:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.988 ************************************ 00:05:58.988 START TEST env_mem_callbacks 00:05:58.988 ************************************ 00:05:58.988 08:48:06 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:58.988 EAL: Detected CPU lcores: 10 00:05:58.988 EAL: Detected NUMA nodes: 1 00:05:58.988 EAL: Detected shared linkage of DPDK 00:05:59.247 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:59.247 EAL: Selected IOVA mode 'PA' 00:05:59.247 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:59.247 00:05:59.247 00:05:59.247 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.247 http://cunit.sourceforge.net/ 00:05:59.247 00:05:59.247 00:05:59.247 Suite: memory 00:05:59.247 Test: test ... 
00:05:59.247 register 0x200000200000 2097152 00:05:59.247 malloc 3145728 00:05:59.247 register 0x200000400000 4194304 00:05:59.247 buf 0x2000004fffc0 len 3145728 PASSED 00:05:59.247 malloc 64 00:05:59.247 buf 0x2000004ffec0 len 64 PASSED 00:05:59.247 malloc 4194304 00:05:59.247 register 0x200000800000 6291456 00:05:59.247 buf 0x2000009fffc0 len 4194304 PASSED 00:05:59.247 free 0x2000004fffc0 3145728 00:05:59.247 free 0x2000004ffec0 64 00:05:59.247 unregister 0x200000400000 4194304 PASSED 00:05:59.247 free 0x2000009fffc0 4194304 00:05:59.247 unregister 0x200000800000 6291456 PASSED 00:05:59.247 malloc 8388608 00:05:59.247 register 0x200000400000 10485760 00:05:59.247 buf 0x2000005fffc0 len 8388608 PASSED 00:05:59.247 free 0x2000005fffc0 8388608 00:05:59.247 unregister 0x200000400000 10485760 PASSED 00:05:59.247 passed 00:05:59.247 00:05:59.247 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.247 suites 1 1 n/a 0 0 00:05:59.247 tests 1 1 1 0 0 00:05:59.247 asserts 15 15 15 0 n/a 00:05:59.247 00:05:59.247 Elapsed time = 0.085 seconds 00:05:59.247 00:05:59.247 real 0m0.279s 00:05:59.247 user 0m0.108s 00:05:59.247 sys 0m0.068s 00:05:59.247 08:48:06 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.247 08:48:06 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:59.247 ************************************ 00:05:59.247 END TEST env_mem_callbacks 00:05:59.248 ************************************ 00:05:59.508 00:05:59.508 real 0m10.329s 00:05:59.508 user 0m8.653s 00:05:59.508 sys 0m1.312s 00:05:59.508 08:48:06 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.508 08:48:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.508 ************************************ 00:05:59.508 END TEST env 00:05:59.508 ************************************ 00:05:59.508 08:48:06 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:59.508 08:48:06 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.508 08:48:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.508 08:48:06 -- common/autotest_common.sh@10 -- # set +x 00:05:59.508 ************************************ 00:05:59.508 START TEST rpc 00:05:59.508 ************************************ 00:05:59.508 08:48:06 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:59.508 * Looking for test storage... 00:05:59.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:59.508 08:48:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59195 00:05:59.508 08:48:06 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:59.508 08:48:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.508 08:48:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59195 00:05:59.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.508 08:48:06 rpc -- common/autotest_common.sh@831 -- # '[' -z 59195 ']' 00:05:59.508 08:48:06 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.508 08:48:06 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.508 08:48:06 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.508 08:48:06 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.508 08:48:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.809 [2024-07-25 08:48:06.733115] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:59.809 [2024-07-25 08:48:06.733263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59195 ] 00:05:59.809 [2024-07-25 08:48:06.895155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.069 [2024-07-25 08:48:07.148525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:00.069 [2024-07-25 08:48:07.148577] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59195' to capture a snapshot of events at runtime. 00:06:00.069 [2024-07-25 08:48:07.148590] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:00.069 [2024-07-25 08:48:07.148598] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:00.069 [2024-07-25 08:48:07.148609] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59195 for offline analysis/debug. 
00:06:00.069 [2024-07-25 08:48:07.148662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.010 08:48:08 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.010 08:48:08 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:01.010 08:48:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:01.010 08:48:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:01.010 08:48:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:01.010 08:48:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:01.010 08:48:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.010 08:48:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.010 08:48:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.010 ************************************ 00:06:01.010 START TEST rpc_integrity 00:06:01.010 ************************************ 00:06:01.010 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:01.010 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:01.010 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.010 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.010 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.010 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:01.010 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:01.010 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:01.010 08:48:08 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:01.010 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.010 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.269 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.269 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:01.269 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:01.269 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.269 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.269 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.269 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:01.269 { 00:06:01.269 "name": "Malloc0", 00:06:01.269 "aliases": [ 00:06:01.270 "6518da1d-4cac-45c9-b491-e159029a6645" 00:06:01.270 ], 00:06:01.270 "product_name": "Malloc disk", 00:06:01.270 "block_size": 512, 00:06:01.270 "num_blocks": 16384, 00:06:01.270 "uuid": "6518da1d-4cac-45c9-b491-e159029a6645", 00:06:01.270 "assigned_rate_limits": { 00:06:01.270 "rw_ios_per_sec": 0, 00:06:01.270 "rw_mbytes_per_sec": 0, 00:06:01.270 "r_mbytes_per_sec": 0, 00:06:01.270 "w_mbytes_per_sec": 0 00:06:01.270 }, 00:06:01.270 "claimed": false, 00:06:01.270 "zoned": false, 00:06:01.270 "supported_io_types": { 00:06:01.270 "read": true, 00:06:01.270 "write": true, 00:06:01.270 "unmap": true, 00:06:01.270 "flush": true, 00:06:01.270 "reset": true, 00:06:01.270 "nvme_admin": false, 00:06:01.270 "nvme_io": false, 00:06:01.270 "nvme_io_md": false, 00:06:01.270 "write_zeroes": true, 00:06:01.270 "zcopy": true, 00:06:01.270 "get_zone_info": false, 00:06:01.270 "zone_management": false, 00:06:01.270 "zone_append": false, 00:06:01.270 "compare": false, 00:06:01.270 "compare_and_write": false, 00:06:01.270 "abort": true, 00:06:01.270 "seek_hole": false, 
00:06:01.270 "seek_data": false, 00:06:01.270 "copy": true, 00:06:01.270 "nvme_iov_md": false 00:06:01.270 }, 00:06:01.270 "memory_domains": [ 00:06:01.270 { 00:06:01.270 "dma_device_id": "system", 00:06:01.270 "dma_device_type": 1 00:06:01.270 }, 00:06:01.270 { 00:06:01.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.270 "dma_device_type": 2 00:06:01.270 } 00:06:01.270 ], 00:06:01.270 "driver_specific": {} 00:06:01.270 } 00:06:01.270 ]' 00:06:01.270 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:01.270 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:01.270 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:01.270 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.270 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.270 [2024-07-25 08:48:08.232874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:01.270 [2024-07-25 08:48:08.232941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:01.270 [2024-07-25 08:48:08.232975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:06:01.270 [2024-07-25 08:48:08.233003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:01.270 [2024-07-25 08:48:08.235559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:01.270 [2024-07-25 08:48:08.235657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:01.270 Passthru0 00:06:01.270 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.270 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:01.270 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.270 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:01.270 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.270 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:01.270 { 00:06:01.270 "name": "Malloc0", 00:06:01.270 "aliases": [ 00:06:01.270 "6518da1d-4cac-45c9-b491-e159029a6645" 00:06:01.270 ], 00:06:01.270 "product_name": "Malloc disk", 00:06:01.270 "block_size": 512, 00:06:01.270 "num_blocks": 16384, 00:06:01.270 "uuid": "6518da1d-4cac-45c9-b491-e159029a6645", 00:06:01.270 "assigned_rate_limits": { 00:06:01.270 "rw_ios_per_sec": 0, 00:06:01.270 "rw_mbytes_per_sec": 0, 00:06:01.270 "r_mbytes_per_sec": 0, 00:06:01.270 "w_mbytes_per_sec": 0 00:06:01.270 }, 00:06:01.270 "claimed": true, 00:06:01.270 "claim_type": "exclusive_write", 00:06:01.270 "zoned": false, 00:06:01.270 "supported_io_types": { 00:06:01.270 "read": true, 00:06:01.270 "write": true, 00:06:01.270 "unmap": true, 00:06:01.270 "flush": true, 00:06:01.270 "reset": true, 00:06:01.270 "nvme_admin": false, 00:06:01.270 "nvme_io": false, 00:06:01.270 "nvme_io_md": false, 00:06:01.270 "write_zeroes": true, 00:06:01.270 "zcopy": true, 00:06:01.270 "get_zone_info": false, 00:06:01.270 "zone_management": false, 00:06:01.270 "zone_append": false, 00:06:01.270 "compare": false, 00:06:01.270 "compare_and_write": false, 00:06:01.270 "abort": true, 00:06:01.270 "seek_hole": false, 00:06:01.270 "seek_data": false, 00:06:01.270 "copy": true, 00:06:01.270 "nvme_iov_md": false 00:06:01.270 }, 00:06:01.270 "memory_domains": [ 00:06:01.270 { 00:06:01.270 "dma_device_id": "system", 00:06:01.270 "dma_device_type": 1 00:06:01.270 }, 00:06:01.270 { 00:06:01.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.270 "dma_device_type": 2 00:06:01.270 } 00:06:01.270 ], 00:06:01.270 "driver_specific": {} 00:06:01.270 }, 00:06:01.270 { 00:06:01.270 "name": "Passthru0", 00:06:01.270 "aliases": [ 00:06:01.270 "0b36a035-a657-5cf9-b6ff-f11133bb23b7" 00:06:01.270 ], 00:06:01.270 "product_name": "passthru", 00:06:01.270 
"block_size": 512, 00:06:01.270 "num_blocks": 16384, 00:06:01.270 "uuid": "0b36a035-a657-5cf9-b6ff-f11133bb23b7", 00:06:01.270 "assigned_rate_limits": { 00:06:01.270 "rw_ios_per_sec": 0, 00:06:01.270 "rw_mbytes_per_sec": 0, 00:06:01.270 "r_mbytes_per_sec": 0, 00:06:01.270 "w_mbytes_per_sec": 0 00:06:01.270 }, 00:06:01.270 "claimed": false, 00:06:01.270 "zoned": false, 00:06:01.270 "supported_io_types": { 00:06:01.270 "read": true, 00:06:01.270 "write": true, 00:06:01.270 "unmap": true, 00:06:01.270 "flush": true, 00:06:01.270 "reset": true, 00:06:01.270 "nvme_admin": false, 00:06:01.270 "nvme_io": false, 00:06:01.270 "nvme_io_md": false, 00:06:01.270 "write_zeroes": true, 00:06:01.270 "zcopy": true, 00:06:01.270 "get_zone_info": false, 00:06:01.270 "zone_management": false, 00:06:01.270 "zone_append": false, 00:06:01.270 "compare": false, 00:06:01.270 "compare_and_write": false, 00:06:01.270 "abort": true, 00:06:01.270 "seek_hole": false, 00:06:01.270 "seek_data": false, 00:06:01.270 "copy": true, 00:06:01.270 "nvme_iov_md": false 00:06:01.270 }, 00:06:01.270 "memory_domains": [ 00:06:01.270 { 00:06:01.270 "dma_device_id": "system", 00:06:01.270 "dma_device_type": 1 00:06:01.270 }, 00:06:01.270 { 00:06:01.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.270 "dma_device_type": 2 00:06:01.270 } 00:06:01.270 ], 00:06:01.270 "driver_specific": { 00:06:01.270 "passthru": { 00:06:01.270 "name": "Passthru0", 00:06:01.270 "base_bdev_name": "Malloc0" 00:06:01.270 } 00:06:01.270 } 00:06:01.270 } 00:06:01.270 ]' 00:06:01.270 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:01.270 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:01.270 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:01.270 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.270 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.270 08:48:08 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.270 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:01.270 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.270 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.270 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.270 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:01.270 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.270 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.270 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.270 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:01.270 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:01.529 ************************************ 00:06:01.529 END TEST rpc_integrity 00:06:01.529 ************************************ 00:06:01.529 08:48:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:01.529 00:06:01.529 real 0m0.369s 00:06:01.529 user 0m0.207s 00:06:01.529 sys 0m0.053s 00:06:01.529 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.529 08:48:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.529 08:48:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:01.529 08:48:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.529 08:48:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.529 08:48:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.529 ************************************ 00:06:01.529 START TEST rpc_plugins 00:06:01.529 ************************************ 00:06:01.529 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:01.529 08:48:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:06:01.529 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.529 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.529 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.529 08:48:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:01.529 08:48:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:01.529 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.529 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.529 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.529 08:48:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:01.529 { 00:06:01.529 "name": "Malloc1", 00:06:01.529 "aliases": [ 00:06:01.529 "69060f27-648b-4f75-8e32-ba85bb432791" 00:06:01.529 ], 00:06:01.529 "product_name": "Malloc disk", 00:06:01.529 "block_size": 4096, 00:06:01.529 "num_blocks": 256, 00:06:01.529 "uuid": "69060f27-648b-4f75-8e32-ba85bb432791", 00:06:01.529 "assigned_rate_limits": { 00:06:01.529 "rw_ios_per_sec": 0, 00:06:01.529 "rw_mbytes_per_sec": 0, 00:06:01.529 "r_mbytes_per_sec": 0, 00:06:01.529 "w_mbytes_per_sec": 0 00:06:01.529 }, 00:06:01.529 "claimed": false, 00:06:01.529 "zoned": false, 00:06:01.529 "supported_io_types": { 00:06:01.529 "read": true, 00:06:01.529 "write": true, 00:06:01.529 "unmap": true, 00:06:01.529 "flush": true, 00:06:01.529 "reset": true, 00:06:01.529 "nvme_admin": false, 00:06:01.529 "nvme_io": false, 00:06:01.529 "nvme_io_md": false, 00:06:01.529 "write_zeroes": true, 00:06:01.529 "zcopy": true, 00:06:01.529 "get_zone_info": false, 00:06:01.529 "zone_management": false, 00:06:01.529 "zone_append": false, 00:06:01.529 "compare": false, 00:06:01.529 "compare_and_write": false, 00:06:01.529 "abort": true, 00:06:01.529 "seek_hole": false, 00:06:01.529 "seek_data": false, 00:06:01.529 "copy": 
true, 00:06:01.529 "nvme_iov_md": false 00:06:01.529 }, 00:06:01.529 "memory_domains": [ 00:06:01.529 { 00:06:01.529 "dma_device_id": "system", 00:06:01.529 "dma_device_type": 1 00:06:01.529 }, 00:06:01.529 { 00:06:01.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.529 "dma_device_type": 2 00:06:01.529 } 00:06:01.529 ], 00:06:01.529 "driver_specific": {} 00:06:01.529 } 00:06:01.529 ]' 00:06:01.529 08:48:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:01.529 08:48:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:01.529 08:48:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:01.529 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.529 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.529 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.529 08:48:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:01.529 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.529 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.529 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.529 08:48:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:01.529 08:48:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:01.788 ************************************ 00:06:01.788 END TEST rpc_plugins 00:06:01.788 ************************************ 00:06:01.788 08:48:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:01.788 00:06:01.788 real 0m0.172s 00:06:01.788 user 0m0.096s 00:06:01.788 sys 0m0.031s 00:06:01.788 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.788 08:48:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.788 08:48:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:01.788 08:48:08 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.788 08:48:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.788 08:48:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.788 ************************************ 00:06:01.788 START TEST rpc_trace_cmd_test 00:06:01.788 ************************************ 00:06:01.788 08:48:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:01.788 08:48:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:01.788 08:48:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:01.788 08:48:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.788 08:48:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.788 08:48:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.788 08:48:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:01.788 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59195", 00:06:01.788 "tpoint_group_mask": "0x8", 00:06:01.788 "iscsi_conn": { 00:06:01.788 "mask": "0x2", 00:06:01.788 "tpoint_mask": "0x0" 00:06:01.788 }, 00:06:01.788 "scsi": { 00:06:01.788 "mask": "0x4", 00:06:01.788 "tpoint_mask": "0x0" 00:06:01.788 }, 00:06:01.788 "bdev": { 00:06:01.788 "mask": "0x8", 00:06:01.788 "tpoint_mask": "0xffffffffffffffff" 00:06:01.788 }, 00:06:01.788 "nvmf_rdma": { 00:06:01.788 "mask": "0x10", 00:06:01.788 "tpoint_mask": "0x0" 00:06:01.788 }, 00:06:01.788 "nvmf_tcp": { 00:06:01.789 "mask": "0x20", 00:06:01.789 "tpoint_mask": "0x0" 00:06:01.789 }, 00:06:01.789 "ftl": { 00:06:01.789 "mask": "0x40", 00:06:01.789 "tpoint_mask": "0x0" 00:06:01.789 }, 00:06:01.789 "blobfs": { 00:06:01.789 "mask": "0x80", 00:06:01.789 "tpoint_mask": "0x0" 00:06:01.789 }, 00:06:01.789 "dsa": { 00:06:01.789 "mask": "0x200", 00:06:01.789 "tpoint_mask": "0x0" 00:06:01.789 }, 00:06:01.789 "thread": { 00:06:01.789 "mask": "0x400", 00:06:01.789 
"tpoint_mask": "0x0" 00:06:01.789 }, 00:06:01.789 "nvme_pcie": { 00:06:01.789 "mask": "0x800", 00:06:01.789 "tpoint_mask": "0x0" 00:06:01.789 }, 00:06:01.789 "iaa": { 00:06:01.789 "mask": "0x1000", 00:06:01.789 "tpoint_mask": "0x0" 00:06:01.789 }, 00:06:01.789 "nvme_tcp": { 00:06:01.789 "mask": "0x2000", 00:06:01.789 "tpoint_mask": "0x0" 00:06:01.789 }, 00:06:01.789 "bdev_nvme": { 00:06:01.789 "mask": "0x4000", 00:06:01.789 "tpoint_mask": "0x0" 00:06:01.789 }, 00:06:01.789 "sock": { 00:06:01.789 "mask": "0x8000", 00:06:01.789 "tpoint_mask": "0x0" 00:06:01.789 } 00:06:01.789 }' 00:06:01.789 08:48:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:01.789 08:48:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:01.789 08:48:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:01.789 08:48:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:01.789 08:48:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:01.789 08:48:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:01.789 08:48:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:01.789 08:48:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:02.048 08:48:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:02.048 ************************************ 00:06:02.048 END TEST rpc_trace_cmd_test 00:06:02.048 ************************************ 00:06:02.048 08:48:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:02.048 00:06:02.048 real 0m0.235s 00:06:02.048 user 0m0.196s 00:06:02.048 sys 0m0.029s 00:06:02.048 08:48:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.048 08:48:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.048 08:48:08 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:02.048 08:48:08 rpc -- 
rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:02.048 08:48:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:02.048 08:48:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.048 08:48:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.048 08:48:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.048 ************************************ 00:06:02.048 START TEST rpc_daemon_integrity 00:06:02.048 ************************************ 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.048 
08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:02.048 { 00:06:02.048 "name": "Malloc2", 00:06:02.048 "aliases": [ 00:06:02.048 "499b1aa9-57eb-45cc-930c-86e62362ad01" 00:06:02.048 ], 00:06:02.048 "product_name": "Malloc disk", 00:06:02.048 "block_size": 512, 00:06:02.048 "num_blocks": 16384, 00:06:02.048 "uuid": "499b1aa9-57eb-45cc-930c-86e62362ad01", 00:06:02.048 "assigned_rate_limits": { 00:06:02.048 "rw_ios_per_sec": 0, 00:06:02.048 "rw_mbytes_per_sec": 0, 00:06:02.048 "r_mbytes_per_sec": 0, 00:06:02.048 "w_mbytes_per_sec": 0 00:06:02.048 }, 00:06:02.048 "claimed": false, 00:06:02.048 "zoned": false, 00:06:02.048 "supported_io_types": { 00:06:02.048 "read": true, 00:06:02.048 "write": true, 00:06:02.048 "unmap": true, 00:06:02.048 "flush": true, 00:06:02.048 "reset": true, 00:06:02.048 "nvme_admin": false, 00:06:02.048 "nvme_io": false, 00:06:02.048 "nvme_io_md": false, 00:06:02.048 "write_zeroes": true, 00:06:02.048 "zcopy": true, 00:06:02.048 "get_zone_info": false, 00:06:02.048 "zone_management": false, 00:06:02.048 "zone_append": false, 00:06:02.048 "compare": false, 00:06:02.048 "compare_and_write": false, 00:06:02.048 "abort": true, 00:06:02.048 "seek_hole": false, 00:06:02.048 "seek_data": false, 00:06:02.048 "copy": true, 00:06:02.048 "nvme_iov_md": false 00:06:02.048 }, 00:06:02.048 "memory_domains": [ 00:06:02.048 { 00:06:02.048 "dma_device_id": "system", 00:06:02.048 "dma_device_type": 1 00:06:02.048 }, 00:06:02.048 { 00:06:02.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.048 "dma_device_type": 2 00:06:02.048 } 00:06:02.048 ], 00:06:02.048 "driver_specific": {} 00:06:02.048 } 00:06:02.048 ]' 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # 
rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.048 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.048 [2024-07-25 08:48:09.163408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:02.048 [2024-07-25 08:48:09.163474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:02.048 [2024-07-25 08:48:09.163504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:06:02.048 [2024-07-25 08:48:09.163514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:02.048 [2024-07-25 08:48:09.165923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:02.048 [2024-07-25 08:48:09.165965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:02.309 Passthru0 00:06:02.309 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.309 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:02.309 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.309 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.309 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.309 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:02.309 { 00:06:02.309 "name": "Malloc2", 00:06:02.309 "aliases": [ 00:06:02.309 "499b1aa9-57eb-45cc-930c-86e62362ad01" 00:06:02.309 ], 00:06:02.309 "product_name": "Malloc disk", 00:06:02.309 "block_size": 512, 00:06:02.309 "num_blocks": 16384, 00:06:02.309 "uuid": "499b1aa9-57eb-45cc-930c-86e62362ad01", 00:06:02.309 "assigned_rate_limits": { 00:06:02.309 "rw_ios_per_sec": 0, 00:06:02.309 "rw_mbytes_per_sec": 0, 00:06:02.309 
"r_mbytes_per_sec": 0, 00:06:02.309 "w_mbytes_per_sec": 0 00:06:02.309 }, 00:06:02.309 "claimed": true, 00:06:02.309 "claim_type": "exclusive_write", 00:06:02.309 "zoned": false, 00:06:02.309 "supported_io_types": { 00:06:02.309 "read": true, 00:06:02.309 "write": true, 00:06:02.309 "unmap": true, 00:06:02.309 "flush": true, 00:06:02.309 "reset": true, 00:06:02.309 "nvme_admin": false, 00:06:02.309 "nvme_io": false, 00:06:02.309 "nvme_io_md": false, 00:06:02.309 "write_zeroes": true, 00:06:02.309 "zcopy": true, 00:06:02.309 "get_zone_info": false, 00:06:02.309 "zone_management": false, 00:06:02.309 "zone_append": false, 00:06:02.309 "compare": false, 00:06:02.309 "compare_and_write": false, 00:06:02.309 "abort": true, 00:06:02.309 "seek_hole": false, 00:06:02.309 "seek_data": false, 00:06:02.309 "copy": true, 00:06:02.309 "nvme_iov_md": false 00:06:02.309 }, 00:06:02.309 "memory_domains": [ 00:06:02.309 { 00:06:02.309 "dma_device_id": "system", 00:06:02.309 "dma_device_type": 1 00:06:02.309 }, 00:06:02.309 { 00:06:02.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.309 "dma_device_type": 2 00:06:02.309 } 00:06:02.309 ], 00:06:02.309 "driver_specific": {} 00:06:02.309 }, 00:06:02.309 { 00:06:02.309 "name": "Passthru0", 00:06:02.309 "aliases": [ 00:06:02.309 "dbeaf90c-3ced-5358-b354-3bdde13d6130" 00:06:02.309 ], 00:06:02.309 "product_name": "passthru", 00:06:02.309 "block_size": 512, 00:06:02.309 "num_blocks": 16384, 00:06:02.309 "uuid": "dbeaf90c-3ced-5358-b354-3bdde13d6130", 00:06:02.309 "assigned_rate_limits": { 00:06:02.309 "rw_ios_per_sec": 0, 00:06:02.309 "rw_mbytes_per_sec": 0, 00:06:02.309 "r_mbytes_per_sec": 0, 00:06:02.309 "w_mbytes_per_sec": 0 00:06:02.309 }, 00:06:02.309 "claimed": false, 00:06:02.309 "zoned": false, 00:06:02.309 "supported_io_types": { 00:06:02.309 "read": true, 00:06:02.309 "write": true, 00:06:02.309 "unmap": true, 00:06:02.309 "flush": true, 00:06:02.309 "reset": true, 00:06:02.309 "nvme_admin": false, 00:06:02.309 "nvme_io": 
false, 00:06:02.309 "nvme_io_md": false, 00:06:02.309 "write_zeroes": true, 00:06:02.309 "zcopy": true, 00:06:02.309 "get_zone_info": false, 00:06:02.309 "zone_management": false, 00:06:02.309 "zone_append": false, 00:06:02.309 "compare": false, 00:06:02.309 "compare_and_write": false, 00:06:02.309 "abort": true, 00:06:02.309 "seek_hole": false, 00:06:02.309 "seek_data": false, 00:06:02.309 "copy": true, 00:06:02.310 "nvme_iov_md": false 00:06:02.310 }, 00:06:02.310 "memory_domains": [ 00:06:02.310 { 00:06:02.310 "dma_device_id": "system", 00:06:02.310 "dma_device_type": 1 00:06:02.310 }, 00:06:02.310 { 00:06:02.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.310 "dma_device_type": 2 00:06:02.310 } 00:06:02.310 ], 00:06:02.310 "driver_specific": { 00:06:02.310 "passthru": { 00:06:02.310 "name": "Passthru0", 00:06:02.310 "base_bdev_name": "Malloc2" 00:06:02.310 } 00:06:02.310 } 00:06:02.310 } 00:06:02.310 ]' 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:02.310 08:48:09 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:02.310 00:06:02.310 real 0m0.350s 00:06:02.310 user 0m0.191s 00:06:02.310 sys 0m0.057s 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.310 08:48:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.310 ************************************ 00:06:02.310 END TEST rpc_daemon_integrity 00:06:02.310 ************************************ 00:06:02.310 08:48:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:02.310 08:48:09 rpc -- rpc/rpc.sh@84 -- # killprocess 59195 00:06:02.310 08:48:09 rpc -- common/autotest_common.sh@950 -- # '[' -z 59195 ']' 00:06:02.310 08:48:09 rpc -- common/autotest_common.sh@954 -- # kill -0 59195 00:06:02.310 08:48:09 rpc -- common/autotest_common.sh@955 -- # uname 00:06:02.310 08:48:09 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.310 08:48:09 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59195 00:06:02.569 killing process with pid 59195 00:06:02.569 08:48:09 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.569 08:48:09 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.569 08:48:09 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59195' 00:06:02.569 08:48:09 rpc -- common/autotest_common.sh@969 -- # kill 59195 00:06:02.569 08:48:09 rpc -- common/autotest_common.sh@974 -- # wait 59195 00:06:05.124 00:06:05.124 real 
0m5.618s 00:06:05.124 user 0m6.129s 00:06:05.124 sys 0m0.875s 00:06:05.124 08:48:12 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.124 ************************************ 00:06:05.124 END TEST rpc 00:06:05.124 ************************************ 00:06:05.124 08:48:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.124 08:48:12 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:05.124 08:48:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.124 08:48:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.124 08:48:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.124 ************************************ 00:06:05.124 START TEST skip_rpc 00:06:05.124 ************************************ 00:06:05.124 08:48:12 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:05.124 * Looking for test storage... 00:06:05.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:05.383 08:48:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:05.383 08:48:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:05.383 08:48:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:05.383 08:48:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.383 08:48:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.383 08:48:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.383 ************************************ 00:06:05.383 START TEST skip_rpc 00:06:05.383 ************************************ 00:06:05.383 08:48:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:05.383 08:48:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59422 00:06:05.383 08:48:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:05.383 08:48:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.383 08:48:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:05.383 [2024-07-25 08:48:12.386629] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:05.383 [2024-07-25 08:48:12.386868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59422 ] 00:06:05.642 [2024-07-25 08:48:12.548968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.902 [2024-07-25 08:48:12.810248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:11.204 
08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59422 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 59422 ']' 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 59422 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59422 00:06:11.204 killing process with pid 59422 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59422' 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 59422 00:06:11.204 08:48:17 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 59422 00:06:13.108 00:06:13.108 real 0m7.696s 00:06:13.108 user 0m7.230s 00:06:13.108 sys 0m0.373s 00:06:13.108 ************************************ 00:06:13.108 END TEST skip_rpc 00:06:13.108 ************************************ 00:06:13.108 08:48:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.108 08:48:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.108 08:48:20 skip_rpc -- 
rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:13.108 08:48:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.108 08:48:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.108 08:48:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.108 ************************************ 00:06:13.108 START TEST skip_rpc_with_json 00:06:13.108 ************************************ 00:06:13.108 08:48:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:13.108 08:48:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:13.108 08:48:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59526 00:06:13.108 08:48:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.108 08:48:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.108 08:48:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59526 00:06:13.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.108 08:48:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 59526 ']' 00:06:13.108 08:48:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.108 08:48:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.108 08:48:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:13.108 08:48:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.108 08:48:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.108 [2024-07-25 08:48:20.149475] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:13.108 [2024-07-25 08:48:20.149641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59526 ] 00:06:13.367 [2024-07-25 08:48:20.311523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.630 [2024-07-25 08:48:20.574915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.568 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.568 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:14.568 08:48:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:14.568 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.568 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.568 [2024-07-25 08:48:21.549919] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:14.568 request: 00:06:14.568 { 00:06:14.568 "trtype": "tcp", 00:06:14.568 "method": "nvmf_get_transports", 00:06:14.568 "req_id": 1 00:06:14.568 } 00:06:14.568 Got JSON-RPC error response 00:06:14.568 response: 00:06:14.568 { 00:06:14.568 "code": -19, 00:06:14.568 "message": "No such device" 00:06:14.568 } 00:06:14.568 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:14.568 08:48:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t 
tcp 00:06:14.568 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.568 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.568 [2024-07-25 08:48:21.562039] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.568 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.568 08:48:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:14.568 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.568 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.827 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.827 08:48:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:14.827 { 00:06:14.827 "subsystems": [ 00:06:14.827 { 00:06:14.827 "subsystem": "keyring", 00:06:14.827 "config": [] 00:06:14.827 }, 00:06:14.827 { 00:06:14.827 "subsystem": "iobuf", 00:06:14.827 "config": [ 00:06:14.827 { 00:06:14.827 "method": "iobuf_set_options", 00:06:14.827 "params": { 00:06:14.827 "small_pool_count": 8192, 00:06:14.827 "large_pool_count": 1024, 00:06:14.827 "small_bufsize": 8192, 00:06:14.827 "large_bufsize": 135168 00:06:14.827 } 00:06:14.827 } 00:06:14.827 ] 00:06:14.827 }, 00:06:14.827 { 00:06:14.827 "subsystem": "sock", 00:06:14.827 "config": [ 00:06:14.827 { 00:06:14.827 "method": "sock_set_default_impl", 00:06:14.827 "params": { 00:06:14.827 "impl_name": "posix" 00:06:14.827 } 00:06:14.827 }, 00:06:14.827 { 00:06:14.827 "method": "sock_impl_set_options", 00:06:14.827 "params": { 00:06:14.827 "impl_name": "ssl", 00:06:14.827 "recv_buf_size": 4096, 00:06:14.827 "send_buf_size": 4096, 00:06:14.827 "enable_recv_pipe": true, 00:06:14.827 "enable_quickack": false, 00:06:14.827 "enable_placement_id": 0, 00:06:14.827 
"enable_zerocopy_send_server": true, 00:06:14.827 "enable_zerocopy_send_client": false, 00:06:14.827 "zerocopy_threshold": 0, 00:06:14.827 "tls_version": 0, 00:06:14.827 "enable_ktls": false 00:06:14.827 } 00:06:14.827 }, 00:06:14.827 { 00:06:14.827 "method": "sock_impl_set_options", 00:06:14.827 "params": { 00:06:14.827 "impl_name": "posix", 00:06:14.827 "recv_buf_size": 2097152, 00:06:14.827 "send_buf_size": 2097152, 00:06:14.827 "enable_recv_pipe": true, 00:06:14.827 "enable_quickack": false, 00:06:14.827 "enable_placement_id": 0, 00:06:14.827 "enable_zerocopy_send_server": true, 00:06:14.827 "enable_zerocopy_send_client": false, 00:06:14.827 "zerocopy_threshold": 0, 00:06:14.827 "tls_version": 0, 00:06:14.827 "enable_ktls": false 00:06:14.827 } 00:06:14.827 } 00:06:14.827 ] 00:06:14.827 }, 00:06:14.827 { 00:06:14.827 "subsystem": "vmd", 00:06:14.827 "config": [] 00:06:14.827 }, 00:06:14.827 { 00:06:14.827 "subsystem": "accel", 00:06:14.827 "config": [ 00:06:14.827 { 00:06:14.827 "method": "accel_set_options", 00:06:14.827 "params": { 00:06:14.827 "small_cache_size": 128, 00:06:14.827 "large_cache_size": 16, 00:06:14.827 "task_count": 2048, 00:06:14.827 "sequence_count": 2048, 00:06:14.827 "buf_count": 2048 00:06:14.827 } 00:06:14.827 } 00:06:14.827 ] 00:06:14.827 }, 00:06:14.827 { 00:06:14.827 "subsystem": "bdev", 00:06:14.827 "config": [ 00:06:14.827 { 00:06:14.827 "method": "bdev_set_options", 00:06:14.827 "params": { 00:06:14.827 "bdev_io_pool_size": 65535, 00:06:14.827 "bdev_io_cache_size": 256, 00:06:14.827 "bdev_auto_examine": true, 00:06:14.827 "iobuf_small_cache_size": 128, 00:06:14.827 "iobuf_large_cache_size": 16 00:06:14.827 } 00:06:14.827 }, 00:06:14.827 { 00:06:14.827 "method": "bdev_raid_set_options", 00:06:14.827 "params": { 00:06:14.827 "process_window_size_kb": 1024, 00:06:14.827 "process_max_bandwidth_mb_sec": 0 00:06:14.827 } 00:06:14.827 }, 00:06:14.827 { 00:06:14.827 "method": "bdev_iscsi_set_options", 00:06:14.827 "params": { 00:06:14.827 
"timeout_sec": 30 00:06:14.827 } 00:06:14.827 }, 00:06:14.827 { 00:06:14.827 "method": "bdev_nvme_set_options", 00:06:14.827 "params": { 00:06:14.827 "action_on_timeout": "none", 00:06:14.827 "timeout_us": 0, 00:06:14.827 "timeout_admin_us": 0, 00:06:14.827 "keep_alive_timeout_ms": 10000, 00:06:14.827 "arbitration_burst": 0, 00:06:14.827 "low_priority_weight": 0, 00:06:14.827 "medium_priority_weight": 0, 00:06:14.827 "high_priority_weight": 0, 00:06:14.827 "nvme_adminq_poll_period_us": 10000, 00:06:14.827 "nvme_ioq_poll_period_us": 0, 00:06:14.827 "io_queue_requests": 0, 00:06:14.827 "delay_cmd_submit": true, 00:06:14.827 "transport_retry_count": 4, 00:06:14.827 "bdev_retry_count": 3, 00:06:14.827 "transport_ack_timeout": 0, 00:06:14.827 "ctrlr_loss_timeout_sec": 0, 00:06:14.827 "reconnect_delay_sec": 0, 00:06:14.828 "fast_io_fail_timeout_sec": 0, 00:06:14.828 "disable_auto_failback": false, 00:06:14.828 "generate_uuids": false, 00:06:14.828 "transport_tos": 0, 00:06:14.828 "nvme_error_stat": false, 00:06:14.828 "rdma_srq_size": 0, 00:06:14.828 "io_path_stat": false, 00:06:14.828 "allow_accel_sequence": false, 00:06:14.828 "rdma_max_cq_size": 0, 00:06:14.828 "rdma_cm_event_timeout_ms": 0, 00:06:14.828 "dhchap_digests": [ 00:06:14.828 "sha256", 00:06:14.828 "sha384", 00:06:14.828 "sha512" 00:06:14.828 ], 00:06:14.828 "dhchap_dhgroups": [ 00:06:14.828 "null", 00:06:14.828 "ffdhe2048", 00:06:14.828 "ffdhe3072", 00:06:14.828 "ffdhe4096", 00:06:14.828 "ffdhe6144", 00:06:14.828 "ffdhe8192" 00:06:14.828 ] 00:06:14.828 } 00:06:14.828 }, 00:06:14.828 { 00:06:14.828 "method": "bdev_nvme_set_hotplug", 00:06:14.828 "params": { 00:06:14.828 "period_us": 100000, 00:06:14.828 "enable": false 00:06:14.828 } 00:06:14.828 }, 00:06:14.828 { 00:06:14.828 "method": "bdev_wait_for_examine" 00:06:14.828 } 00:06:14.828 ] 00:06:14.828 }, 00:06:14.828 { 00:06:14.828 "subsystem": "scsi", 00:06:14.828 "config": null 00:06:14.828 }, 00:06:14.828 { 00:06:14.828 "subsystem": "scheduler", 
00:06:14.828 "config": [ 00:06:14.828 { 00:06:14.828 "method": "framework_set_scheduler", 00:06:14.828 "params": { 00:06:14.828 "name": "static" 00:06:14.828 } 00:06:14.828 } 00:06:14.828 ] 00:06:14.828 }, 00:06:14.828 { 00:06:14.828 "subsystem": "vhost_scsi", 00:06:14.828 "config": [] 00:06:14.828 }, 00:06:14.828 { 00:06:14.828 "subsystem": "vhost_blk", 00:06:14.828 "config": [] 00:06:14.828 }, 00:06:14.828 { 00:06:14.828 "subsystem": "ublk", 00:06:14.828 "config": [] 00:06:14.828 }, 00:06:14.828 { 00:06:14.828 "subsystem": "nbd", 00:06:14.828 "config": [] 00:06:14.828 }, 00:06:14.828 { 00:06:14.828 "subsystem": "nvmf", 00:06:14.828 "config": [ 00:06:14.828 { 00:06:14.828 "method": "nvmf_set_config", 00:06:14.828 "params": { 00:06:14.828 "discovery_filter": "match_any", 00:06:14.828 "admin_cmd_passthru": { 00:06:14.828 "identify_ctrlr": false 00:06:14.828 } 00:06:14.828 } 00:06:14.828 }, 00:06:14.828 { 00:06:14.828 "method": "nvmf_set_max_subsystems", 00:06:14.828 "params": { 00:06:14.828 "max_subsystems": 1024 00:06:14.828 } 00:06:14.828 }, 00:06:14.828 { 00:06:14.828 "method": "nvmf_set_crdt", 00:06:14.828 "params": { 00:06:14.828 "crdt1": 0, 00:06:14.828 "crdt2": 0, 00:06:14.828 "crdt3": 0 00:06:14.828 } 00:06:14.828 }, 00:06:14.828 { 00:06:14.828 "method": "nvmf_create_transport", 00:06:14.828 "params": { 00:06:14.828 "trtype": "TCP", 00:06:14.828 "max_queue_depth": 128, 00:06:14.828 "max_io_qpairs_per_ctrlr": 127, 00:06:14.828 "in_capsule_data_size": 4096, 00:06:14.828 "max_io_size": 131072, 00:06:14.828 "io_unit_size": 131072, 00:06:14.828 "max_aq_depth": 128, 00:06:14.828 "num_shared_buffers": 511, 00:06:14.828 "buf_cache_size": 4294967295, 00:06:14.828 "dif_insert_or_strip": false, 00:06:14.828 "zcopy": false, 00:06:14.828 "c2h_success": true, 00:06:14.828 "sock_priority": 0, 00:06:14.828 "abort_timeout_sec": 1, 00:06:14.828 "ack_timeout": 0, 00:06:14.828 "data_wr_pool_size": 0 00:06:14.828 } 00:06:14.828 } 00:06:14.828 ] 00:06:14.828 }, 00:06:14.828 { 
00:06:14.828 "subsystem": "iscsi", 00:06:14.828 "config": [ 00:06:14.828 { 00:06:14.828 "method": "iscsi_set_options", 00:06:14.828 "params": { 00:06:14.828 "node_base": "iqn.2016-06.io.spdk", 00:06:14.828 "max_sessions": 128, 00:06:14.828 "max_connections_per_session": 2, 00:06:14.828 "max_queue_depth": 64, 00:06:14.828 "default_time2wait": 2, 00:06:14.828 "default_time2retain": 20, 00:06:14.828 "first_burst_length": 8192, 00:06:14.828 "immediate_data": true, 00:06:14.828 "allow_duplicated_isid": false, 00:06:14.828 "error_recovery_level": 0, 00:06:14.828 "nop_timeout": 60, 00:06:14.828 "nop_in_interval": 30, 00:06:14.828 "disable_chap": false, 00:06:14.828 "require_chap": false, 00:06:14.828 "mutual_chap": false, 00:06:14.828 "chap_group": 0, 00:06:14.828 "max_large_datain_per_connection": 64, 00:06:14.828 "max_r2t_per_connection": 4, 00:06:14.828 "pdu_pool_size": 36864, 00:06:14.828 "immediate_data_pool_size": 16384, 00:06:14.828 "data_out_pool_size": 2048 00:06:14.828 } 00:06:14.828 } 00:06:14.828 ] 00:06:14.828 } 00:06:14.828 ] 00:06:14.828 } 00:06:14.828 08:48:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:14.828 08:48:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59526 00:06:14.828 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59526 ']' 00:06:14.828 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59526 00:06:14.828 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:14.828 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.828 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59526 00:06:14.828 killing process with pid 59526 00:06:14.828 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.828 08:48:21 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.828 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59526' 00:06:14.828 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59526 00:06:14.828 08:48:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59526 00:06:17.362 08:48:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59586 00:06:17.362 08:48:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:17.362 08:48:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:22.687 08:48:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59586 00:06:22.687 08:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59586 ']' 00:06:22.687 08:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59586 00:06:22.687 08:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:22.687 08:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.687 08:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59586 00:06:22.687 killing process with pid 59586 00:06:22.687 08:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.687 08:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.687 08:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59586' 00:06:22.687 08:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59586 00:06:22.687 08:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59586 
00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:25.222 ************************************ 00:06:25.222 END TEST skip_rpc_with_json 00:06:25.222 ************************************ 00:06:25.222 00:06:25.222 real 0m12.062s 00:06:25.222 user 0m11.485s 00:06:25.222 sys 0m0.832s 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.222 08:48:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:25.222 08:48:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.222 08:48:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.222 08:48:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.222 ************************************ 00:06:25.222 START TEST skip_rpc_with_delay 00:06:25.222 ************************************ 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:25.222 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:25.222 [2024-07-25 08:48:32.307058] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:25.222 [2024-07-25 08:48:32.307183] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:25.482 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:25.482 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.482 ************************************ 00:06:25.482 END TEST skip_rpc_with_delay 00:06:25.482 ************************************ 00:06:25.482 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.482 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.482 00:06:25.482 real 0m0.232s 00:06:25.482 user 0m0.137s 00:06:25.482 sys 0m0.092s 00:06:25.482 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.482 08:48:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:25.482 08:48:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:25.482 08:48:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:25.482 08:48:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:25.482 08:48:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.482 08:48:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.482 08:48:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.482 ************************************ 00:06:25.482 START TEST exit_on_failed_rpc_init 00:06:25.482 ************************************ 00:06:25.482 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:25.482 08:48:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59721 00:06:25.482 08:48:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:06:25.482 08:48:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59721 00:06:25.482 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 59721 ']' 00:06:25.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.482 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.482 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.482 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.482 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.482 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:25.482 [2024-07-25 08:48:32.564378] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:25.482 [2024-07-25 08:48:32.564608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59721 ] 00:06:25.741 [2024-07-25 08:48:32.726368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.000 [2024-07-25 08:48:32.985445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.936 08:48:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.936 08:48:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:26.936 08:48:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.936 08:48:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.936 08:48:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:26.936 08:48:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.936 08:48:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.936 08:48:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.936 08:48:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.936 08:48:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.936 08:48:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.936 08:48:33 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.936 08:48:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.936 08:48:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:26.936 08:48:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.936 [2024-07-25 08:48:34.034206] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:26.936 [2024-07-25 08:48:34.034443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59745 ] 00:06:27.194 [2024-07-25 08:48:34.185561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.452 [2024-07-25 08:48:34.463571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.452 [2024-07-25 08:48:34.463763] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:27.452 [2024-07-25 08:48:34.463824] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:27.453 [2024-07-25 08:48:34.463857] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.019 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:28.019 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.019 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:28.019 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:28.019 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:28.020 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.020 08:48:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:28.020 08:48:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59721 00:06:28.020 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 59721 ']' 00:06:28.020 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 59721 00:06:28.020 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:28.020 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.020 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59721 00:06:28.020 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.020 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.020 killing process with pid 59721 00:06:28.020 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 59721' 00:06:28.020 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 59721 00:06:28.020 08:48:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 59721 00:06:30.557 00:06:30.557 real 0m5.142s 00:06:30.557 user 0m5.853s 00:06:30.557 sys 0m0.567s 00:06:30.557 08:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.557 08:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:30.557 ************************************ 00:06:30.557 END TEST exit_on_failed_rpc_init 00:06:30.557 ************************************ 00:06:30.557 08:48:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:30.557 00:06:30.557 real 0m25.505s 00:06:30.557 user 0m24.822s 00:06:30.557 sys 0m2.132s 00:06:30.557 ************************************ 00:06:30.557 END TEST skip_rpc 00:06:30.557 ************************************ 00:06:30.557 08:48:37 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.557 08:48:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.816 08:48:37 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:30.816 08:48:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.816 08:48:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.816 08:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:30.816 ************************************ 00:06:30.816 START TEST rpc_client 00:06:30.816 ************************************ 00:06:30.816 08:48:37 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:30.816 * Looking for test storage... 
00:06:30.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:30.816 08:48:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:30.816 OK 00:06:30.816 08:48:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:30.816 00:06:30.816 real 0m0.198s 00:06:30.816 user 0m0.094s 00:06:30.816 sys 0m0.114s 00:06:30.816 08:48:37 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.816 08:48:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:30.816 ************************************ 00:06:30.816 END TEST rpc_client 00:06:30.816 ************************************ 00:06:31.075 08:48:37 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:31.075 08:48:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.075 08:48:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.075 08:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:31.075 ************************************ 00:06:31.075 START TEST json_config 00:06:31.075 ************************************ 00:06:31.075 08:48:37 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:31.075 08:48:38 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dec25852-ab30-4fdb-92ca-55715b3a612a 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=dec25852-ab30-4fdb-92ca-55715b3a612a 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.075 08:48:38 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:31.075 08:48:38 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.075 08:48:38 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.075 08:48:38 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.075 08:48:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.075 08:48:38 json_config -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.076 08:48:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.076 08:48:38 json_config -- paths/export.sh@5 -- # export PATH 00:06:31.076 08:48:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.076 08:48:38 json_config -- nvmf/common.sh@47 -- # : 0 00:06:31.076 08:48:38 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:31.076 08:48:38 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:31.076 08:48:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.076 08:48:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.076 08:48:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.076 08:48:38 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:31.076 08:48:38 
json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:31.076 08:48:38 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@11 -- # [[ 1 -eq 1 ]] 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 
00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:06:31.076 08:48:38 json_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:31.076 INFO: JSON configuration test init 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:31.076 08:48:38 
json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:31.076 08:48:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:31.076 08:48:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:31.076 08:48:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:31.076 08:48:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.076 08:48:38 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:31.076 08:48:38 json_config -- json_config/common.sh@9 -- # local app=target 00:06:31.076 08:48:38 json_config -- json_config/common.sh@10 -- # shift 00:06:31.076 08:48:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:31.076 08:48:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:31.076 08:48:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:31.076 08:48:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:31.076 08:48:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:31.076 08:48:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59899 00:06:31.076 08:48:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:31.076 Waiting for target to run... 
00:06:31.076 08:48:38 json_config -- json_config/common.sh@25 -- # waitforlisten 59899 /var/tmp/spdk_tgt.sock 00:06:31.076 08:48:38 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:31.076 08:48:38 json_config -- common/autotest_common.sh@831 -- # '[' -z 59899 ']' 00:06:31.076 08:48:38 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:31.076 08:48:38 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.076 08:48:38 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:31.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:31.076 08:48:38 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.076 08:48:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.336 [2024-07-25 08:48:38.221807] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:31.336 [2024-07-25 08:48:38.222044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59899 ] 00:06:31.595 [2024-07-25 08:48:38.599620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.854 [2024-07-25 08:48:38.829607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.112 08:48:39 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.112 08:48:39 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:32.112 08:48:39 json_config -- json_config/common.sh@26 -- # echo '' 00:06:32.112 00:06:32.112 08:48:39 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:32.112 08:48:39 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:32.112 08:48:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:32.112 08:48:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.112 08:48:39 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:32.112 08:48:39 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:32.112 08:48:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:32.112 08:48:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.112 08:48:39 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:32.112 08:48:39 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:32.112 08:48:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 
00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:33.492 08:48:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:33.492 08:48:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:33.492 08:48:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@51 -- # sort 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:33.492 08:48:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:33.492 08:48:40 json_config -- 
common/autotest_common.sh@10 -- # set +x 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@291 -- # create_iscsi_subsystem_config 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@225 -- # timing_enter create_iscsi_subsystem_config 00:06:33.492 08:48:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:33.492 08:48:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.492 08:48:40 json_config -- json_config/json_config.sh@226 -- # tgt_rpc bdev_malloc_create 64 1024 --name MallocForIscsi0 00:06:33.492 08:48:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 64 1024 --name MallocForIscsi0 00:06:33.750 MallocForIscsi0 00:06:33.750 08:48:40 json_config -- json_config/json_config.sh@227 -- # tgt_rpc iscsi_create_portal_group 1 127.0.0.1:3260 00:06:33.750 08:48:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_portal_group 1 127.0.0.1:3260 00:06:34.009 08:48:40 json_config -- json_config/json_config.sh@228 -- # tgt_rpc iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:06:34.009 08:48:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:06:34.268 08:48:41 json_config -- json_config/json_config.sh@229 -- # tgt_rpc iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:06:34.268 08:48:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:06:34.268 08:48:41 json_config -- json_config/json_config.sh@230 -- # timing_exit create_iscsi_subsystem_config 00:06:34.268 08:48:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:34.268 08:48:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.268 08:48:41 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:06:34.268 08:48:41 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:34.268 08:48:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:34.268 08:48:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.528 08:48:41 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:34.528 08:48:41 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:34.528 08:48:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:34.528 MallocBdevForConfigChangeCheck 00:06:34.528 08:48:41 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:34.528 08:48:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:34.528 08:48:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.790 08:48:41 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:34.790 08:48:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:35.049 INFO: shutting down applications... 00:06:35.049 08:48:41 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
00:06:35.049 08:48:41 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:35.049 08:48:41 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:35.049 08:48:41 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:35.049 08:48:41 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:35.308 Calling clear_iscsi_subsystem 00:06:35.308 Calling clear_nvmf_subsystem 00:06:35.308 Calling clear_nbd_subsystem 00:06:35.308 Calling clear_ublk_subsystem 00:06:35.308 Calling clear_vhost_blk_subsystem 00:06:35.308 Calling clear_vhost_scsi_subsystem 00:06:35.308 Calling clear_bdev_subsystem 00:06:35.308 08:48:42 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:35.308 08:48:42 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:35.308 08:48:42 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:35.308 08:48:42 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:35.308 08:48:42 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:35.308 08:48:42 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:35.875 08:48:42 json_config -- json_config/json_config.sh@349 -- # break 00:06:35.875 08:48:42 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:35.875 08:48:42 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:35.875 08:48:42 json_config -- json_config/common.sh@31 -- # local app=target 00:06:35.875 08:48:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 
]] 00:06:35.875 08:48:42 json_config -- json_config/common.sh@35 -- # [[ -n 59899 ]] 00:06:35.875 08:48:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59899 00:06:35.875 08:48:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:35.875 08:48:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.875 08:48:42 json_config -- json_config/common.sh@41 -- # kill -0 59899 00:06:35.875 08:48:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:36.134 08:48:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:36.135 08:48:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:36.135 08:48:43 json_config -- json_config/common.sh@41 -- # kill -0 59899 00:06:36.135 08:48:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:36.717 08:48:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:36.717 08:48:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:36.717 08:48:43 json_config -- json_config/common.sh@41 -- # kill -0 59899 00:06:36.717 08:48:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:37.282 08:48:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:37.282 08:48:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.282 08:48:44 json_config -- json_config/common.sh@41 -- # kill -0 59899 00:06:37.282 08:48:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:37.282 08:48:44 json_config -- json_config/common.sh@43 -- # break 00:06:37.282 08:48:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:37.282 08:48:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:37.282 SPDK target shutdown done 00:06:37.282 08:48:44 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:37.282 INFO: relaunching applications... 
00:06:37.282 08:48:44 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:37.282 08:48:44 json_config -- json_config/common.sh@9 -- # local app=target 00:06:37.282 08:48:44 json_config -- json_config/common.sh@10 -- # shift 00:06:37.282 08:48:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:37.282 08:48:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:37.282 08:48:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:37.282 08:48:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.282 08:48:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.282 08:48:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60100 00:06:37.282 08:48:44 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:37.282 08:48:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:37.282 Waiting for target to run... 00:06:37.282 08:48:44 json_config -- json_config/common.sh@25 -- # waitforlisten 60100 /var/tmp/spdk_tgt.sock 00:06:37.282 08:48:44 json_config -- common/autotest_common.sh@831 -- # '[' -z 60100 ']' 00:06:37.282 08:48:44 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:37.282 08:48:44 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:37.282 08:48:44 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:37.282 08:48:44 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.282 08:48:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.541 [2024-07-25 08:48:44.411069] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:37.541 [2024-07-25 08:48:44.411209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60100 ] 00:06:37.798 [2024-07-25 08:48:44.785171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.057 [2024-07-25 08:48:45.009498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.432 00:06:39.432 INFO: Checking if target configuration is the same... 00:06:39.432 08:48:46 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.432 08:48:46 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:39.432 08:48:46 json_config -- json_config/common.sh@26 -- # echo '' 00:06:39.432 08:48:46 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:39.432 08:48:46 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:39.432 08:48:46 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:39.432 08:48:46 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:39.432 08:48:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:39.432 + '[' 2 -ne 2 ']' 00:06:39.432 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:39.432 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:39.432 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:39.432 +++ basename /dev/fd/62 00:06:39.433 ++ mktemp /tmp/62.XXX 00:06:39.433 + tmp_file_1=/tmp/62.ImC 00:06:39.433 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:39.433 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:39.433 + tmp_file_2=/tmp/spdk_tgt_config.json.fDB 00:06:39.433 + ret=0 00:06:39.433 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:39.433 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:39.433 + diff -u /tmp/62.ImC /tmp/spdk_tgt_config.json.fDB 00:06:39.433 + echo 'INFO: JSON config files are the same' 00:06:39.433 INFO: JSON config files are the same 00:06:39.433 + rm /tmp/62.ImC /tmp/spdk_tgt_config.json.fDB 00:06:39.433 + exit 0 00:06:39.690 INFO: changing configuration and checking if this can be detected... 00:06:39.690 08:48:46 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:39.690 08:48:46 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
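The "JSON config files are the same" verdict above comes from normalizing both files with `config_filter.py -method sort` and then running `diff -u` on the results, so key order cannot cause a false mismatch. A condensed sketch of the same sort-then-diff idea, substituting `python3 -m json.tool --sort-keys` for `config_filter.py` (the file contents are illustrative):

```shell
#!/usr/bin/env bash
# Order-insensitive JSON comparison: normalize both files by
# sorting keys, then diff the normalized forms.
tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
echo '{"subsystems": [], "version": "24.09"}' > "$tmp_file_1"
echo '{"version": "24.09", "subsystems": []}' > "$tmp_file_2"

sorted_1=$(mktemp)
sorted_2=$(mktemp)
python3 -m json.tool --sort-keys "$tmp_file_1" > "$sorted_1"
python3 -m json.tool --sort-keys "$tmp_file_2" > "$sorted_2"

if diff -u "$sorted_1" "$sorted_2"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$tmp_file_1" "$tmp_file_2" "$sorted_1" "$sorted_2"
```

With the two inputs above the diff is empty, so the script prints `INFO: JSON config files are the same`; the second half of the test then deletes a bdev and reruns the same comparison expecting a nonzero `diff` status.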
00:06:39.690 08:48:46 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:39.690 08:48:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:39.690 08:48:46 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:39.690 08:48:46 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:39.690 08:48:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:39.690 + '[' 2 -ne 2 ']' 00:06:39.690 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:39.690 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:39.690 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:39.690 +++ basename /dev/fd/62 00:06:39.690 ++ mktemp /tmp/62.XXX 00:06:39.690 + tmp_file_1=/tmp/62.FJ2 00:06:39.690 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:39.690 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:39.690 + tmp_file_2=/tmp/spdk_tgt_config.json.YX6 00:06:39.690 + ret=0 00:06:39.690 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:40.257 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:40.257 + diff -u /tmp/62.FJ2 /tmp/spdk_tgt_config.json.YX6 00:06:40.257 + ret=1 00:06:40.257 + echo '=== Start of file: /tmp/62.FJ2 ===' 00:06:40.257 + cat /tmp/62.FJ2 00:06:40.257 + echo '=== End of file: /tmp/62.FJ2 ===' 00:06:40.257 + echo '' 00:06:40.257 + echo '=== Start of file: /tmp/spdk_tgt_config.json.YX6 ===' 00:06:40.257 + cat /tmp/spdk_tgt_config.json.YX6 00:06:40.257 + echo '=== End of file: /tmp/spdk_tgt_config.json.YX6 ===' 00:06:40.257 + echo '' 00:06:40.257 + rm /tmp/62.FJ2 
/tmp/spdk_tgt_config.json.YX6 00:06:40.257 + exit 1 00:06:40.257 INFO: configuration change detected. 00:06:40.257 08:48:47 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:40.257 08:48:47 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:40.257 08:48:47 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:40.257 08:48:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:40.257 08:48:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.257 08:48:47 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:40.257 08:48:47 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:40.257 08:48:47 json_config -- json_config/json_config.sh@321 -- # [[ -n 60100 ]] 00:06:40.257 08:48:47 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:40.257 08:48:47 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:40.257 08:48:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:40.257 08:48:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.257 08:48:47 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:40.257 08:48:47 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:40.257 08:48:47 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:40.257 08:48:47 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:40.257 08:48:47 json_config -- json_config/json_config.sh@201 -- # [[ 1 -eq 1 ]] 00:06:40.257 08:48:47 json_config -- json_config/json_config.sh@202 -- # rbd_cleanup 00:06:40.257 08:48:47 json_config -- common/autotest_common.sh@1033 -- # hash ceph 00:06:40.257 08:48:47 json_config -- common/autotest_common.sh@1034 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:06:40.257 + base_dir=/var/tmp/ceph 
00:06:40.257 + image=/var/tmp/ceph/ceph_raw.img 00:06:40.257 + dev=/dev/loop200 00:06:40.257 + pkill -9 ceph 00:06:40.257 + sleep 3 00:06:43.604 + umount /dev/loop200p2 00:06:43.604 umount: /dev/loop200p2: no mount point specified. 00:06:43.604 + losetup -d /dev/loop200 00:06:43.604 losetup: /dev/loop200: failed to use device: No such device 00:06:43.604 + rm -rf /var/tmp/ceph 00:06:43.604 08:48:50 json_config -- common/autotest_common.sh@1035 -- # rm -f /var/tmp/ceph_raw.img 00:06:43.604 08:48:50 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:43.604 08:48:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:43.604 08:48:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.604 08:48:50 json_config -- json_config/json_config.sh@327 -- # killprocess 60100 00:06:43.604 08:48:50 json_config -- common/autotest_common.sh@950 -- # '[' -z 60100 ']' 00:06:43.604 08:48:50 json_config -- common/autotest_common.sh@954 -- # kill -0 60100 00:06:43.604 08:48:50 json_config -- common/autotest_common.sh@955 -- # uname 00:06:43.604 08:48:50 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.604 08:48:50 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60100 00:06:43.604 killing process with pid 60100 00:06:43.604 08:48:50 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.604 08:48:50 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.604 08:48:50 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60100' 00:06:43.604 08:48:50 json_config -- common/autotest_common.sh@969 -- # kill 60100 00:06:43.604 08:48:50 json_config -- common/autotest_common.sh@974 -- # wait 60100 00:06:44.543 08:48:51 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 
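The `killprocess 60100` sequence traced here first confirms the PID is alive with `kill -0`, looks up its command name via `ps --no-headers -o comm=`, and only then signals and waits on it. A reduced sketch of that check-then-kill pattern (the `sleep` job stands in for the `spdk_tgt` reactor; this is not the full `autotest_common.sh` helper, which also special-cases `sudo`):

```shell
#!/usr/bin/env bash
# Check-then-kill: verify the PID exists (kill -0), look up its
# command name (ps -o comm=), then terminate and reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                 # already gone?
    local name
    name=$(ps --no-headers -o comm= -p "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap our own child;
                                               # a signalled exit
                                               # status is expected
}

sleep 30 &        # stand-in for the target process
killprocess $!
```

Reaping with `wait` matters when the target is a child of the test shell: without it, `kill -0` can keep succeeding against the zombie and a later PID check would misreport the process as still running.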
00:06:44.543 08:48:51 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:44.543 08:48:51 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:44.543 08:48:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.543 INFO: Success 00:06:44.543 08:48:51 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:44.543 08:48:51 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:44.543 ************************************ 00:06:44.543 END TEST json_config 00:06:44.543 ************************************ 00:06:44.543 00:06:44.543 real 0m13.451s 00:06:44.543 user 0m15.136s 00:06:44.543 sys 0m1.914s 00:06:44.543 08:48:51 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.543 08:48:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.543 08:48:51 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:44.543 08:48:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.543 08:48:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.543 08:48:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.543 ************************************ 00:06:44.543 START TEST json_config_extra_key 00:06:44.543 ************************************ 00:06:44.543 08:48:51 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:44.543 08:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:44.543 08:48:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:44.543 08:48:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.543 08:48:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.543 08:48:51 json_config_extra_key -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.543 08:48:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.543 08:48:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.543 08:48:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.543 08:48:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.543 08:48:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.543 08:48:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.543 08:48:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.543 08:48:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dec25852-ab30-4fdb-92ca-55715b3a612a 00:06:44.544 08:48:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=dec25852-ab30-4fdb-92ca-55715b3a612a 00:06:44.544 08:48:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.544 08:48:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.544 08:48:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:44.544 08:48:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.544 08:48:51 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.544 08:48:51 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.544 08:48:51 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.544 08:48:51 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.544 08:48:51 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.544 08:48:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.544 08:48:51 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.544 08:48:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:44.544 08:48:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.544 08:48:51 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:44.544 08:48:51 json_config_extra_key -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:44.544 08:48:51 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:44.544 08:48:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.544 08:48:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.544 08:48:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.544 08:48:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:44.544 08:48:51 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:44.544 08:48:51 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:44.544 08:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:44.544 08:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:44.544 08:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:44.544 08:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:44.544 08:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:44.544 08:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:44.544 08:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:44.544 INFO: launching applications... 
00:06:44.544 08:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:44.544 08:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:44.544 08:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:44.544 08:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:44.544 08:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:44.544 08:48:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:44.544 08:48:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:44.544 08:48:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:44.544 08:48:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:44.544 08:48:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:44.544 08:48:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.544 08:48:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.544 Waiting for target to run... 00:06:44.544 08:48:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60299 00:06:44.544 08:48:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:44.544 08:48:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60299 /var/tmp/spdk_tgt.sock 00:06:44.544 08:48:51 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 60299 ']' 00:06:44.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:44.544 08:48:51 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:44.544 08:48:51 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.544 08:48:51 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:44.544 08:48:51 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:44.544 08:48:51 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.544 08:48:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:44.803 [2024-07-25 08:48:51.718630] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:44.803 [2024-07-25 08:48:51.718752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60299 ] 00:06:45.061 [2024-07-25 08:48:52.109560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.319 [2024-07-25 08:48:52.341312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.259 00:06:46.259 INFO: shutting down applications... 00:06:46.259 08:48:53 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.259 08:48:53 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:46.259 08:48:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:46.259 08:48:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
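The `json_config_test_shutdown_app` flow traced in the following lines sends the target a signal, then re-polls `kill -0 $pid` with `sleep 0.5` between attempts, up to 30 iterations, before printing `SPDK target shutdown done`. A condensed sketch of that bounded polling loop (stand-in process; the harness itself sends SIGINT, whereas this demo uses SIGTERM because background children of a non-interactive shell ignore SIGINT):

```shell
#!/usr/bin/env bash
# Bounded shutdown poll: signal the app, then re-check liveness
# with kill -0 every 0.5 s for up to 30 iterations.
shutdown_app() {
    local pid=$1
    kill -TERM "$pid"       # harness uses SIGINT; SIGTERM here (see
                            # lead-in note on non-interactive shells)
    local i
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "process $pid still alive after 15 s" >&2
    return 1
}

sleep 60 &                  # stand-in for the target under test
shutdown_app $!
```

The retry cap converts a hung target into a loud failure (`return 1` trips the suite's ERR trap) instead of letting the job block until the CI timeout.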
00:06:46.259 08:48:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:46.259 08:48:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:46.259 08:48:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:46.259 08:48:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60299 ]] 00:06:46.259 08:48:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60299 00:06:46.259 08:48:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:46.259 08:48:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:46.259 08:48:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60299 00:06:46.259 08:48:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:46.517 08:48:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:46.517 08:48:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:46.517 08:48:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60299 00:06:46.517 08:48:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:47.085 08:48:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:47.085 08:48:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.085 08:48:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60299 00:06:47.085 08:48:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:47.651 08:48:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:47.651 08:48:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.651 08:48:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60299 00:06:47.651 08:48:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:48.219 08:48:55 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:48.219 08:48:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:48.219 08:48:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60299 00:06:48.219 08:48:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:48.787 08:48:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:48.787 08:48:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:48.787 08:48:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60299 00:06:48.787 08:48:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:49.046 08:48:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:49.046 08:48:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:49.046 08:48:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60299 00:06:49.046 08:48:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:49.046 08:48:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:49.046 SPDK target shutdown done 00:06:49.046 08:48:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:49.046 08:48:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:49.046 Success 00:06:49.046 08:48:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:49.046 ************************************ 00:06:49.046 END TEST json_config_extra_key 00:06:49.046 ************************************ 00:06:49.046 00:06:49.046 real 0m4.672s 00:06:49.046 user 0m4.370s 00:06:49.046 sys 0m0.528s 00:06:49.046 08:48:56 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.046 08:48:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:49.305 08:48:56 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:49.305 08:48:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.305 08:48:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.305 08:48:56 -- common/autotest_common.sh@10 -- # set +x 00:06:49.305 ************************************ 00:06:49.305 START TEST alias_rpc 00:06:49.305 ************************************ 00:06:49.305 08:48:56 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:49.305 * Looking for test storage... 00:06:49.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:49.305 08:48:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:49.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.305 08:48:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60402 00:06:49.305 08:48:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.305 08:48:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60402 00:06:49.305 08:48:56 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 60402 ']' 00:06:49.306 08:48:56 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.306 08:48:56 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.306 08:48:56 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.306 08:48:56 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.306 08:48:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.565 [2024-07-25 08:48:56.424781] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:49.565 [2024-07-25 08:48:56.424919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60402 ] 00:06:49.565 [2024-07-25 08:48:56.578250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.824 [2024-07-25 08:48:56.822524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.763 08:48:57 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.763 08:48:57 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:50.763 08:48:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:51.022 08:48:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60402 00:06:51.022 08:48:57 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 60402 ']' 00:06:51.022 08:48:57 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 60402 00:06:51.022 08:48:57 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:51.022 08:48:57 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.022 08:48:57 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60402 00:06:51.022 killing process with pid 60402 00:06:51.022 08:48:58 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.022 08:48:58 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.022 08:48:58 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60402' 00:06:51.022 08:48:58 alias_rpc -- common/autotest_common.sh@969 -- # kill 60402 00:06:51.022 08:48:58 alias_rpc -- common/autotest_common.sh@974 -- # wait 60402 00:06:53.562 ************************************ 00:06:53.562 END TEST alias_rpc 00:06:53.562 ************************************ 00:06:53.562 00:06:53.562 real 
0m4.335s 00:06:53.562 user 0m4.367s 00:06:53.562 sys 0m0.522s 00:06:53.562 08:49:00 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.562 08:49:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.562 08:49:00 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:53.562 08:49:00 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:53.562 08:49:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.562 08:49:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.562 08:49:00 -- common/autotest_common.sh@10 -- # set +x 00:06:53.562 ************************************ 00:06:53.562 START TEST spdkcli_tcp 00:06:53.562 ************************************ 00:06:53.562 08:49:00 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:53.821 * Looking for test storage... 00:06:53.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:53.821 08:49:00 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:53.821 08:49:00 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:53.821 08:49:00 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:53.821 08:49:00 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:53.821 08:49:00 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:53.821 08:49:00 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:53.821 08:49:00 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:53.821 08:49:00 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:53.821 08:49:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:53.821 08:49:00 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60511 
00:06:53.821 08:49:00 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:53.821 08:49:00 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60511 00:06:53.821 08:49:00 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 60511 ']' 00:06:53.821 08:49:00 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.821 08:49:00 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.821 08:49:00 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.821 08:49:00 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.821 08:49:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:53.821 [2024-07-25 08:49:00.836542] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:53.821 [2024-07-25 08:49:00.836770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60511 ] 00:06:54.080 [2024-07-25 08:49:01.002580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.339 [2024-07-25 08:49:01.248584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.339 [2024-07-25 08:49:01.248620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.277 08:49:02 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.277 08:49:02 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:55.277 08:49:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60528 00:06:55.277 08:49:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:55.277 08:49:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:55.277 [ 00:06:55.277 "bdev_malloc_delete", 00:06:55.277 "bdev_malloc_create", 00:06:55.277 "bdev_null_resize", 00:06:55.277 "bdev_null_delete", 00:06:55.277 "bdev_null_create", 00:06:55.277 "bdev_nvme_cuse_unregister", 00:06:55.277 "bdev_nvme_cuse_register", 00:06:55.277 "bdev_opal_new_user", 00:06:55.277 "bdev_opal_set_lock_state", 00:06:55.277 "bdev_opal_delete", 00:06:55.277 "bdev_opal_get_info", 00:06:55.277 "bdev_opal_create", 00:06:55.277 "bdev_nvme_opal_revert", 00:06:55.277 "bdev_nvme_opal_init", 00:06:55.277 "bdev_nvme_send_cmd", 00:06:55.277 "bdev_nvme_get_path_iostat", 00:06:55.277 "bdev_nvme_get_mdns_discovery_info", 00:06:55.277 "bdev_nvme_stop_mdns_discovery", 00:06:55.277 "bdev_nvme_start_mdns_discovery", 00:06:55.277 "bdev_nvme_set_multipath_policy", 00:06:55.277 "bdev_nvme_set_preferred_path", 00:06:55.277 
"bdev_nvme_get_io_paths", 00:06:55.277 "bdev_nvme_remove_error_injection", 00:06:55.277 "bdev_nvme_add_error_injection", 00:06:55.277 "bdev_nvme_get_discovery_info", 00:06:55.277 "bdev_nvme_stop_discovery", 00:06:55.277 "bdev_nvme_start_discovery", 00:06:55.277 "bdev_nvme_get_controller_health_info", 00:06:55.277 "bdev_nvme_disable_controller", 00:06:55.277 "bdev_nvme_enable_controller", 00:06:55.277 "bdev_nvme_reset_controller", 00:06:55.277 "bdev_nvme_get_transport_statistics", 00:06:55.277 "bdev_nvme_apply_firmware", 00:06:55.277 "bdev_nvme_detach_controller", 00:06:55.277 "bdev_nvme_get_controllers", 00:06:55.277 "bdev_nvme_attach_controller", 00:06:55.277 "bdev_nvme_set_hotplug", 00:06:55.277 "bdev_nvme_set_options", 00:06:55.278 "bdev_passthru_delete", 00:06:55.278 "bdev_passthru_create", 00:06:55.278 "bdev_lvol_set_parent_bdev", 00:06:55.278 "bdev_lvol_set_parent", 00:06:55.278 "bdev_lvol_check_shallow_copy", 00:06:55.278 "bdev_lvol_start_shallow_copy", 00:06:55.278 "bdev_lvol_grow_lvstore", 00:06:55.278 "bdev_lvol_get_lvols", 00:06:55.278 "bdev_lvol_get_lvstores", 00:06:55.278 "bdev_lvol_delete", 00:06:55.278 "bdev_lvol_set_read_only", 00:06:55.278 "bdev_lvol_resize", 00:06:55.278 "bdev_lvol_decouple_parent", 00:06:55.278 "bdev_lvol_inflate", 00:06:55.278 "bdev_lvol_rename", 00:06:55.278 "bdev_lvol_clone_bdev", 00:06:55.278 "bdev_lvol_clone", 00:06:55.278 "bdev_lvol_snapshot", 00:06:55.278 "bdev_lvol_create", 00:06:55.278 "bdev_lvol_delete_lvstore", 00:06:55.278 "bdev_lvol_rename_lvstore", 00:06:55.278 "bdev_lvol_create_lvstore", 00:06:55.278 "bdev_raid_set_options", 00:06:55.278 "bdev_raid_remove_base_bdev", 00:06:55.278 "bdev_raid_add_base_bdev", 00:06:55.278 "bdev_raid_delete", 00:06:55.278 "bdev_raid_create", 00:06:55.278 "bdev_raid_get_bdevs", 00:06:55.278 "bdev_error_inject_error", 00:06:55.278 "bdev_error_delete", 00:06:55.278 "bdev_error_create", 00:06:55.278 "bdev_split_delete", 00:06:55.278 "bdev_split_create", 00:06:55.278 "bdev_delay_delete", 
00:06:55.278 "bdev_delay_create", 00:06:55.278 "bdev_delay_update_latency", 00:06:55.278 "bdev_zone_block_delete", 00:06:55.278 "bdev_zone_block_create", 00:06:55.278 "blobfs_create", 00:06:55.278 "blobfs_detect", 00:06:55.278 "blobfs_set_cache_size", 00:06:55.278 "bdev_aio_delete", 00:06:55.278 "bdev_aio_rescan", 00:06:55.278 "bdev_aio_create", 00:06:55.278 "bdev_ftl_set_property", 00:06:55.278 "bdev_ftl_get_properties", 00:06:55.278 "bdev_ftl_get_stats", 00:06:55.278 "bdev_ftl_unmap", 00:06:55.278 "bdev_ftl_unload", 00:06:55.278 "bdev_ftl_delete", 00:06:55.278 "bdev_ftl_load", 00:06:55.278 "bdev_ftl_create", 00:06:55.278 "bdev_virtio_attach_controller", 00:06:55.278 "bdev_virtio_scsi_get_devices", 00:06:55.278 "bdev_virtio_detach_controller", 00:06:55.278 "bdev_virtio_blk_set_hotplug", 00:06:55.278 "bdev_iscsi_delete", 00:06:55.278 "bdev_iscsi_create", 00:06:55.278 "bdev_iscsi_set_options", 00:06:55.278 "bdev_rbd_get_clusters_info", 00:06:55.278 "bdev_rbd_unregister_cluster", 00:06:55.278 "bdev_rbd_register_cluster", 00:06:55.278 "bdev_rbd_resize", 00:06:55.278 "bdev_rbd_delete", 00:06:55.278 "bdev_rbd_create", 00:06:55.278 "accel_error_inject_error", 00:06:55.278 "ioat_scan_accel_module", 00:06:55.278 "dsa_scan_accel_module", 00:06:55.278 "iaa_scan_accel_module", 00:06:55.278 "keyring_file_remove_key", 00:06:55.278 "keyring_file_add_key", 00:06:55.278 "keyring_linux_set_options", 00:06:55.278 "iscsi_get_histogram", 00:06:55.278 "iscsi_enable_histogram", 00:06:55.278 "iscsi_set_options", 00:06:55.278 "iscsi_get_auth_groups", 00:06:55.278 "iscsi_auth_group_remove_secret", 00:06:55.278 "iscsi_auth_group_add_secret", 00:06:55.278 "iscsi_delete_auth_group", 00:06:55.278 "iscsi_create_auth_group", 00:06:55.278 "iscsi_set_discovery_auth", 00:06:55.278 "iscsi_get_options", 00:06:55.278 "iscsi_target_node_request_logout", 00:06:55.278 "iscsi_target_node_set_redirect", 00:06:55.278 "iscsi_target_node_set_auth", 00:06:55.278 "iscsi_target_node_add_lun", 00:06:55.278 
"iscsi_get_stats", 00:06:55.278 "iscsi_get_connections", 00:06:55.278 "iscsi_portal_group_set_auth", 00:06:55.278 "iscsi_start_portal_group", 00:06:55.278 "iscsi_delete_portal_group", 00:06:55.278 "iscsi_create_portal_group", 00:06:55.278 "iscsi_get_portal_groups", 00:06:55.278 "iscsi_delete_target_node", 00:06:55.278 "iscsi_target_node_remove_pg_ig_maps", 00:06:55.278 "iscsi_target_node_add_pg_ig_maps", 00:06:55.278 "iscsi_create_target_node", 00:06:55.278 "iscsi_get_target_nodes", 00:06:55.278 "iscsi_delete_initiator_group", 00:06:55.278 "iscsi_initiator_group_remove_initiators", 00:06:55.278 "iscsi_initiator_group_add_initiators", 00:06:55.278 "iscsi_create_initiator_group", 00:06:55.278 "iscsi_get_initiator_groups", 00:06:55.278 "nvmf_set_crdt", 00:06:55.278 "nvmf_set_config", 00:06:55.278 "nvmf_set_max_subsystems", 00:06:55.278 "nvmf_stop_mdns_prr", 00:06:55.278 "nvmf_publish_mdns_prr", 00:06:55.278 "nvmf_subsystem_get_listeners", 00:06:55.278 "nvmf_subsystem_get_qpairs", 00:06:55.278 "nvmf_subsystem_get_controllers", 00:06:55.278 "nvmf_get_stats", 00:06:55.278 "nvmf_get_transports", 00:06:55.278 "nvmf_create_transport", 00:06:55.278 "nvmf_get_targets", 00:06:55.278 "nvmf_delete_target", 00:06:55.278 "nvmf_create_target", 00:06:55.278 "nvmf_subsystem_allow_any_host", 00:06:55.278 "nvmf_subsystem_remove_host", 00:06:55.278 "nvmf_subsystem_add_host", 00:06:55.278 "nvmf_ns_remove_host", 00:06:55.278 "nvmf_ns_add_host", 00:06:55.278 "nvmf_subsystem_remove_ns", 00:06:55.278 "nvmf_subsystem_add_ns", 00:06:55.278 "nvmf_subsystem_listener_set_ana_state", 00:06:55.278 "nvmf_discovery_get_referrals", 00:06:55.278 "nvmf_discovery_remove_referral", 00:06:55.278 "nvmf_discovery_add_referral", 00:06:55.278 "nvmf_subsystem_remove_listener", 00:06:55.278 "nvmf_subsystem_add_listener", 00:06:55.278 "nvmf_delete_subsystem", 00:06:55.278 "nvmf_create_subsystem", 00:06:55.278 "nvmf_get_subsystems", 00:06:55.278 "env_dpdk_get_mem_stats", 00:06:55.278 "nbd_get_disks", 00:06:55.278 
"nbd_stop_disk", 00:06:55.278 "nbd_start_disk", 00:06:55.278 "ublk_recover_disk", 00:06:55.278 "ublk_get_disks", 00:06:55.278 "ublk_stop_disk", 00:06:55.278 "ublk_start_disk", 00:06:55.278 "ublk_destroy_target", 00:06:55.278 "ublk_create_target", 00:06:55.278 "virtio_blk_create_transport", 00:06:55.278 "virtio_blk_get_transports", 00:06:55.278 "vhost_controller_set_coalescing", 00:06:55.278 "vhost_get_controllers", 00:06:55.278 "vhost_delete_controller", 00:06:55.278 "vhost_create_blk_controller", 00:06:55.278 "vhost_scsi_controller_remove_target", 00:06:55.278 "vhost_scsi_controller_add_target", 00:06:55.278 "vhost_start_scsi_controller", 00:06:55.278 "vhost_create_scsi_controller", 00:06:55.278 "thread_set_cpumask", 00:06:55.278 "framework_get_governor", 00:06:55.278 "framework_get_scheduler", 00:06:55.278 "framework_set_scheduler", 00:06:55.278 "framework_get_reactors", 00:06:55.278 "thread_get_io_channels", 00:06:55.278 "thread_get_pollers", 00:06:55.278 "thread_get_stats", 00:06:55.278 "framework_monitor_context_switch", 00:06:55.278 "spdk_kill_instance", 00:06:55.278 "log_enable_timestamps", 00:06:55.278 "log_get_flags", 00:06:55.278 "log_clear_flag", 00:06:55.278 "log_set_flag", 00:06:55.278 "log_get_level", 00:06:55.278 "log_set_level", 00:06:55.278 "log_get_print_level", 00:06:55.278 "log_set_print_level", 00:06:55.278 "framework_enable_cpumask_locks", 00:06:55.278 "framework_disable_cpumask_locks", 00:06:55.278 "framework_wait_init", 00:06:55.278 "framework_start_init", 00:06:55.278 "scsi_get_devices", 00:06:55.278 "bdev_get_histogram", 00:06:55.278 "bdev_enable_histogram", 00:06:55.278 "bdev_set_qos_limit", 00:06:55.278 "bdev_set_qd_sampling_period", 00:06:55.278 "bdev_get_bdevs", 00:06:55.278 "bdev_reset_iostat", 00:06:55.278 "bdev_get_iostat", 00:06:55.278 "bdev_examine", 00:06:55.278 "bdev_wait_for_examine", 00:06:55.278 "bdev_set_options", 00:06:55.278 "notify_get_notifications", 00:06:55.278 "notify_get_types", 00:06:55.278 "accel_get_stats", 
00:06:55.278 "accel_set_options", 00:06:55.278 "accel_set_driver", 00:06:55.278 "accel_crypto_key_destroy", 00:06:55.278 "accel_crypto_keys_get", 00:06:55.278 "accel_crypto_key_create", 00:06:55.278 "accel_assign_opc", 00:06:55.278 "accel_get_module_info", 00:06:55.278 "accel_get_opc_assignments", 00:06:55.278 "vmd_rescan", 00:06:55.278 "vmd_remove_device", 00:06:55.278 "vmd_enable", 00:06:55.278 "sock_get_default_impl", 00:06:55.278 "sock_set_default_impl", 00:06:55.278 "sock_impl_set_options", 00:06:55.278 "sock_impl_get_options", 00:06:55.278 "iobuf_get_stats", 00:06:55.278 "iobuf_set_options", 00:06:55.278 "framework_get_pci_devices", 00:06:55.278 "framework_get_config", 00:06:55.278 "framework_get_subsystems", 00:06:55.278 "trace_get_info", 00:06:55.278 "trace_get_tpoint_group_mask", 00:06:55.278 "trace_disable_tpoint_group", 00:06:55.278 "trace_enable_tpoint_group", 00:06:55.278 "trace_clear_tpoint_mask", 00:06:55.278 "trace_set_tpoint_mask", 00:06:55.278 "keyring_get_keys", 00:06:55.278 "spdk_get_version", 00:06:55.278 "rpc_get_methods" 00:06:55.278 ] 00:06:55.278 08:49:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:55.278 08:49:02 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:55.278 08:49:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:55.278 08:49:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:55.278 08:49:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60511 00:06:55.278 08:49:02 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 60511 ']' 00:06:55.278 08:49:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 60511 00:06:55.279 08:49:02 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:55.538 08:49:02 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.538 08:49:02 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60511 00:06:55.538 killing process with pid 60511 
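The method list above is the result of a JSON-RPC round trip: the test bridges TCP port 9998 to the target's UNIX socket with `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock`, then `scripts/rpc.py ... rpc_get_methods` queries it. A minimal sketch of the JSON-RPC 2.0 framing involved, with an in-process stub standing in for `spdk_tgt` (the stub and its two-method reply are assumptions for illustration; only the request shape and the `rpc_get_methods` name come from the log):

```python
import json
import socket
import threading

# Build a JSON-RPC 2.0 request of the kind scripts/rpc.py sends.
def make_request(method, req_id=1, params=None):
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req).encode()

# Stub server standing in for spdk_tgt behind socat (assumption: the real
# target on /var/tmp/spdk.sock speaks this same wire format).
def stub_server(listener):
    conn, _ = listener.accept()
    with conn:
        req = json.loads(conn.recv(4096).decode())
        resp = {"jsonrpc": "2.0", "id": req["id"],
                "result": ["rpc_get_methods", "spdk_get_version"]}
        conn.sendall(json.dumps(resp).encode())

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # ephemeral port instead of the test's 9998
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=stub_server, args=(listener,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(make_request("rpc_get_methods"))
methods = json.loads(client.recv(65536).decode())["result"]
client.close()
print(methods)
```

Against a real target the `result` array is the full method list dumped above.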
00:06:55.538 08:49:02 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.538 08:49:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.538 08:49:02 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60511' 00:06:55.538 08:49:02 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 60511 00:06:55.538 08:49:02 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 60511 00:06:58.079 ************************************ 00:06:58.079 END TEST spdkcli_tcp 00:06:58.079 ************************************ 00:06:58.079 00:06:58.079 real 0m4.357s 00:06:58.079 user 0m7.533s 00:06:58.079 sys 0m0.558s 00:06:58.079 08:49:04 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.079 08:49:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.079 08:49:04 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:58.079 08:49:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.079 08:49:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.079 08:49:04 -- common/autotest_common.sh@10 -- # set +x 00:06:58.079 ************************************ 00:06:58.079 START TEST dpdk_mem_utility 00:06:58.079 ************************************ 00:06:58.079 08:49:05 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:58.079 * Looking for test storage... 
00:06:58.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:58.079 08:49:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:58.079 08:49:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:58.079 08:49:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60625 00:06:58.079 08:49:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60625 00:06:58.079 08:49:05 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 60625 ']' 00:06:58.079 08:49:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.079 08:49:05 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.079 08:49:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.079 08:49:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.079 08:49:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:58.338 [2024-07-25 08:49:05.253659] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
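Before issuing any RPCs, the harness's `waitforlisten` helper blocks until the freshly started `spdk_tgt` (pid 60625 here) is accepting connections on `/var/tmp/spdk.sock`. A minimal sketch of that readiness poll, assuming a throwaway socket path; the real shell helper in `autotest_common.sh` also re-checks that the pid is still alive while waiting:

```python
import os
import socket
import tempfile
import threading
import time

# Poll until a UNIX-domain socket accepts connections, roughly what
# waitforlisten does for /var/tmp/spdk.sock (sketch only).
def wait_for_listen(path, timeout=5.0, interval=0.05):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True          # target is up and listening
        except OSError:
            time.sleep(interval) # not ready yet; retry
        finally:
            s.close()
    return False

# Demo: bring up a listener after a short delay, then wait for it.
path = os.path.join(tempfile.mkdtemp(), "spdk.sock")

def delayed_server():
    time.sleep(0.2)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    srv.accept()

threading.Thread(target=delayed_server, daemon=True).start()
ok = wait_for_listen(path)
print(ok)
```

Once the poll succeeds, the "Waiting for process to start up and listen..." message above is followed by the first RPC.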
00:06:58.338 [2024-07-25 08:49:05.253916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60625 ] 00:06:58.338 [2024-07-25 08:49:05.417552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.597 [2024-07-25 08:49:05.658757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.534 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.534 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:59.534 08:49:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:59.534 08:49:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:59.534 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.534 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:59.534 { 00:06:59.534 "filename": "/tmp/spdk_mem_dump.txt" 00:06:59.534 } 00:06:59.534 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.534 08:49:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:59.534 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:59.534 1 heaps totaling size 820.000000 MiB 00:06:59.534 size: 820.000000 MiB heap id: 0 00:06:59.534 end heaps---------- 00:06:59.534 8 mempools totaling size 598.116089 MiB 00:06:59.534 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:59.534 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:59.534 size: 84.521057 MiB name: bdev_io_60625 00:06:59.534 size: 51.011292 MiB name: evtpool_60625 00:06:59.534 size: 50.003479 MiB name: msgpool_60625 00:06:59.534 size: 
21.763794 MiB name: PDU_Pool 00:06:59.534 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:59.535 size: 0.026123 MiB name: Session_Pool 00:06:59.535 end mempools------- 00:06:59.535 6 memzones totaling size 4.142822 MiB 00:06:59.535 size: 1.000366 MiB name: RG_ring_0_60625 00:06:59.535 size: 1.000366 MiB name: RG_ring_1_60625 00:06:59.535 size: 1.000366 MiB name: RG_ring_4_60625 00:06:59.535 size: 1.000366 MiB name: RG_ring_5_60625 00:06:59.535 size: 0.125366 MiB name: RG_ring_2_60625 00:06:59.535 size: 0.015991 MiB name: RG_ring_3_60625 00:06:59.535 end memzones------- 00:06:59.535 08:49:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:59.795 heap id: 0 total size: 820.000000 MiB number of busy elements: 289 number of free elements: 18 00:06:59.795 list of free elements. size: 18.454224 MiB 00:06:59.795 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:59.795 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:59.795 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:59.795 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:59.795 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:59.795 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:59.795 element at address: 0x200019600000 with size: 0.999084 MiB 00:06:59.795 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:59.795 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:59.795 element at address: 0x200018e00000 with size: 0.959656 MiB 00:06:59.795 element at address: 0x200019900040 with size: 0.936401 MiB 00:06:59.795 element at address: 0x200000200000 with size: 0.830200 MiB 00:06:59.795 element at address: 0x20001b000000 with size: 0.566833 MiB 00:06:59.795 element at address: 0x200019200000 with size: 0.487976 MiB 00:06:59.795 element at address: 0x200019a00000 with size: 0.485413 MiB 00:06:59.795 element at 
address: 0x200013800000 with size: 0.467651 MiB 00:06:59.795 element at address: 0x200028400000 with size: 0.390442 MiB 00:06:59.795 element at address: 0x200003a00000 with size: 0.351990 MiB 00:06:59.795 list of standard malloc elements. size: 199.281372 MiB 00:06:59.795 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:59.795 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:59.795 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:59.795 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:59.795 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:59.795 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:59.795 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:59.795 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:59.795 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:06:59.795 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:06:59.795 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:06:59.795 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:06:59.795 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:06:59.795 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d5480 with size: 0.000244 MiB 
00:06:59.796 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d7200 with 
size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:06:59.796 element at address: 
0x200003aff980 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:59.796 
element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200013877b80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200013877c80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200013877d80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200013877e80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200013877f80 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200013878080 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200013878180 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200013878280 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200013878380 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200013878480 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200013878580 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000192fdd00 with size: 0.000244 
MiB 00:06:59.796 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x200019abc680 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:06:59.796 element at address: 0x20001b0928c0 
with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:06:59.797 element at 
address: 0x20001b0944c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:06:59.797 element at address: 0x200028463f40 with size: 0.000244 MiB 00:06:59.797 element at address: 0x200028464040 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846af80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846b080 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846b180 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846b280 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846b380 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846b480 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846b580 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846b680 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846b780 with size: 0.000244 MiB 
00:06:59.797 element at address: 0x20002846b880 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846b980 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846be80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846c080 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846c180 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846c280 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846c380 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846c480 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846c580 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846c680 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846c780 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846c880 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846c980 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846d080 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846d180 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846d280 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846d380 with 
size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846d480 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846d580 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846d680 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846d780 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846d880 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846d980 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846da80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846db80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846de80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846df80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846e080 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846e180 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846e280 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846e380 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846e480 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846e580 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846e680 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846e780 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846e880 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846e980 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:06:59.797 element at address: 
0x20002846ef80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846f080 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846f180 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846f280 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846f380 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846f480 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846f580 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846f680 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846f780 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846f880 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846f980 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:06:59.797 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:06:59.797 list of memzone associated elements. 
size: 602.264404 MiB 00:06:59.797 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:59.797 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:59.797 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:59.797 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:59.797 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:59.797 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_60625_0 00:06:59.797 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:59.797 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60625_0 00:06:59.797 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:59.797 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60625_0 00:06:59.797 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:59.797 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:59.797 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:59.797 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:59.797 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:59.797 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60625 00:06:59.797 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:59.797 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60625 00:06:59.797 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:59.797 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60625 00:06:59.797 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:59.797 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:59.797 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:59.798 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:59.798 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:59.798 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:59.798 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:59.798 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:59.798 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:59.798 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60625 00:06:59.798 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:59.798 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60625 00:06:59.798 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:59.798 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60625 00:06:59.798 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:59.798 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60625 00:06:59.798 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:59.798 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60625 00:06:59.798 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:06:59.798 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:59.798 element at address: 0x200013878680 with size: 0.500549 MiB 00:06:59.798 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:59.798 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:06:59.798 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:59.798 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:59.798 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60625 00:06:59.798 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:06:59.798 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:59.798 element at address: 0x200028464140 with size: 0.023804 MiB 00:06:59.798 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:59.798 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:59.798 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_60625 00:06:59.798 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:06:59.798 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:59.798 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:06:59.798 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60625 00:06:59.798 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:59.798 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60625 00:06:59.798 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:06:59.798 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:59.798 08:49:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:59.798 08:49:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60625 00:06:59.798 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 60625 ']' 00:06:59.798 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 60625 00:06:59.798 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:59.798 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.798 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60625 00:06:59.798 killing process with pid 60625 00:06:59.798 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.798 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.798 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60625' 00:06:59.798 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 60625 00:06:59.798 08:49:06 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 60625 00:07:02.334 00:07:02.334 real 0m4.274s 00:07:02.334 user 0m4.221s 00:07:02.334 sys 
0m0.501s 00:07:02.334 ************************************ 00:07:02.334 END TEST dpdk_mem_utility 00:07:02.334 ************************************ 00:07:02.334 08:49:09 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.334 08:49:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:02.334 08:49:09 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:02.334 08:49:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.334 08:49:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.334 08:49:09 -- common/autotest_common.sh@10 -- # set +x 00:07:02.334 ************************************ 00:07:02.334 START TEST event 00:07:02.334 ************************************ 00:07:02.334 08:49:09 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:02.334 * Looking for test storage... 00:07:02.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:02.594 08:49:09 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:02.594 08:49:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:02.594 08:49:09 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:02.594 08:49:09 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:02.594 08:49:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.594 08:49:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.594 ************************************ 00:07:02.594 START TEST event_perf 00:07:02.594 ************************************ 00:07:02.594 08:49:09 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:02.594 Running I/O for 1 seconds...[2024-07-25 08:49:09.525871] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 
24.03.0 initialization...
00:07:02.594 [2024-07-25 08:49:09.526026] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60725 ]
00:07:02.594 [2024-07-25 08:49:09.682182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:02.854 [2024-07-25 08:49:09.921929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:02.854 [2024-07-25 08:49:09.922105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:02.854 [2024-07-25 08:49:09.922236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.854 [2024-07-25 08:49:09.922317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:04.235 Running I/O for 1 seconds...
00:07:04.235 lcore 0: 191447
00:07:04.235 lcore 1: 191446
00:07:04.235 lcore 2: 191447
00:07:04.235 lcore 3: 191448
00:07:04.235 done.
00:07:04.495 ************************************
00:07:04.495 END TEST event_perf
00:07:04.495 ************************************
00:07:04.495 
00:07:04.495 real 0m1.889s
00:07:04.495 user 0m4.641s
00:07:04.495 sys 0m0.124s
00:07:04.495 08:49:11 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:04.495 08:49:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:07:04.495 08:49:11 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:04.495 08:49:11 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:04.495 08:49:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:04.495 08:49:11 event -- common/autotest_common.sh@10 -- # set +x
00:07:04.495 ************************************
00:07:04.495 START TEST event_reactor
00:07:04.495 ************************************
00:07:04.495 08:49:11 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:04.495 [2024-07-25 08:49:11.472566] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:04.496 [2024-07-25 08:49:11.472754] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60770 ]
00:07:04.756 [2024-07-25 08:49:11.628598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:04.756 [2024-07-25 08:49:11.864408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.663 test_start
00:07:06.663 oneshot
00:07:06.663 tick 100
00:07:06.663 tick 100
00:07:06.663 tick 250
00:07:06.663 tick 100
00:07:06.663 tick 100
00:07:06.663 tick 100
00:07:06.663 tick 250
00:07:06.663 tick 500
00:07:06.663 tick 100
00:07:06.663 tick 100
00:07:06.663 tick 250
00:07:06.663 tick 100
00:07:06.663 tick 100
00:07:06.663 test_end
00:07:06.663 
00:07:06.663 real 0m1.871s
00:07:06.663 user 0m1.659s
00:07:06.663 sys 0m0.103s
00:07:06.663 08:49:13 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:06.663 ************************************
00:07:06.663 END TEST event_reactor
00:07:06.663 ************************************
00:07:06.663 08:49:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:07:06.663 08:49:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:06.663 08:49:13 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:06.663 08:49:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:06.663 08:49:13 event -- common/autotest_common.sh@10 -- # set +x
00:07:06.663 ************************************
00:07:06.663 START TEST event_reactor_perf
00:07:06.663 ************************************
00:07:06.663 08:49:13 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:06.663 [2024-07-25 
08:49:13.392037] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:06.663 [2024-07-25 08:49:13.392139] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60812 ]
00:07:06.663 [2024-07-25 08:49:13.553510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:06.921 [2024-07-25 08:49:13.792179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:08.294 test_start
00:07:08.294 test_end
00:07:08.294 Performance: 366474 events per second
00:07:08.294 
00:07:08.294 real 0m1.874s
00:07:08.294 user 0m1.667s
00:07:08.294 sys 0m0.099s
00:07:08.294 08:49:15 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:08.294 ************************************
00:07:08.294 END TEST event_reactor_perf
00:07:08.294 ************************************
00:07:08.294 08:49:15 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:07:08.294 08:49:15 event -- event/event.sh@49 -- # uname -s
00:07:08.294 08:49:15 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:07:08.294 08:49:15 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:07:08.294 08:49:15 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:08.294 08:49:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:08.294 08:49:15 event -- common/autotest_common.sh@10 -- # set +x
00:07:08.294 ************************************
00:07:08.294 START TEST event_scheduler
00:07:08.294 ************************************
00:07:08.294 08:49:15 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:07:08.554 * Looking for test storage... 
00:07:08.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:08.554 08:49:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:08.554 08:49:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60880 00:07:08.554 08:49:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:08.554 08:49:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:08.554 08:49:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60880 00:07:08.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.554 08:49:15 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 60880 ']' 00:07:08.554 08:49:15 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.554 08:49:15 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.554 08:49:15 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.554 08:49:15 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.554 08:49:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:08.554 [2024-07-25 08:49:15.540364] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:08.554 [2024-07-25 08:49:15.540505] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60880 ] 00:07:08.813 [2024-07-25 08:49:15.705104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.072 [2024-07-25 08:49:15.950844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.072 [2024-07-25 08:49:15.951077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.072 [2024-07-25 08:49:15.951241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.072 [2024-07-25 08:49:15.951268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.331 08:49:16 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.331 08:49:16 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:09.331 08:49:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:09.331 08:49:16 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.331 08:49:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:09.331 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:09.332 POWER: Cannot set governor of lcore 0 to userspace 00:07:09.332 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:09.332 POWER: Cannot set governor of lcore 0 to performance 00:07:09.332 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:09.332 POWER: Cannot set governor of lcore 0 to userspace 00:07:09.332 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:09.332 POWER: Cannot set governor of lcore 0 to userspace 00:07:09.332 GUEST_CHANNEL: Opening 
channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:09.332 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:09.332 POWER: Unable to set Power Management Environment for lcore 0 00:07:09.332 [2024-07-25 08:49:16.340521] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:09.332 [2024-07-25 08:49:16.340595] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:09.332 [2024-07-25 08:49:16.340641] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:09.332 [2024-07-25 08:49:16.340683] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:09.332 [2024-07-25 08:49:16.340723] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:09.332 [2024-07-25 08:49:16.340764] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:09.332 08:49:16 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.332 08:49:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:09.332 08:49:16 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.332 08:49:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:09.900 [2024-07-25 08:49:16.728928] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:07:09.900 08:49:16 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.900 08:49:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:09.900 08:49:16 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.900 08:49:16 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.900 08:49:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:09.900 ************************************ 00:07:09.900 START TEST scheduler_create_thread 00:07:09.900 ************************************ 00:07:09.900 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:09.900 08:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:09.900 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.901 2 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.901 3 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.901 4 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.901 5 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.901 6 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.901 7 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.901 8 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.901 9 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.901 10 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.901 08:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.275 08:49:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.276 08:49:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:11.276 08:49:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:11.276 08:49:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.276 08:49:18 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.652 ************************************ 00:07:12.652 END TEST scheduler_create_thread 00:07:12.652 ************************************ 00:07:12.652 08:49:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.652 00:07:12.652 real 0m2.616s 00:07:12.652 user 0m0.026s 00:07:12.652 sys 0m0.008s 00:07:12.652 08:49:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.652 08:49:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.652 08:49:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:12.652 08:49:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60880 00:07:12.652 08:49:19 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 60880 ']' 00:07:12.652 08:49:19 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 60880 00:07:12.652 08:49:19 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:12.652 08:49:19 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.652 08:49:19 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60880 00:07:12.652 killing process with pid 60880 00:07:12.652 08:49:19 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:12.652 08:49:19 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:12.652 08:49:19 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60880' 00:07:12.652 08:49:19 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 60880 00:07:12.652 08:49:19 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 60880 00:07:12.911 [2024-07-25 08:49:19.838438] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:14.290 ************************************ 00:07:14.290 END TEST event_scheduler 00:07:14.290 ************************************ 00:07:14.290 00:07:14.290 real 0m5.951s 00:07:14.290 user 0m9.585s 00:07:14.290 sys 0m0.467s 00:07:14.290 08:49:21 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.290 08:49:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:14.290 08:49:21 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:14.290 08:49:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:14.290 08:49:21 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.290 08:49:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.290 08:49:21 event -- common/autotest_common.sh@10 -- # set +x 00:07:14.290 ************************************ 00:07:14.290 START TEST app_repeat 00:07:14.290 ************************************ 00:07:14.290 08:49:21 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:14.290 08:49:21 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.290 08:49:21 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.290 08:49:21 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:14.290 08:49:21 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:14.290 08:49:21 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:14.290 08:49:21 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:14.290 08:49:21 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:14.290 08:49:21 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60992 00:07:14.290 08:49:21 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:14.290 
08:49:21 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:14.290 Process app_repeat pid: 60992 00:07:14.290 08:49:21 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60992' 00:07:14.290 08:49:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:14.290 spdk_app_start Round 0 00:07:14.290 08:49:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:14.290 08:49:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60992 /var/tmp/spdk-nbd.sock 00:07:14.290 08:49:21 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60992 ']' 00:07:14.290 08:49:21 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:14.290 08:49:21 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.290 08:49:21 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:14.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:14.290 08:49:21 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.290 08:49:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:14.290 [2024-07-25 08:49:21.384401] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:14.290 [2024-07-25 08:49:21.384687] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60992 ] 00:07:14.551 [2024-07-25 08:49:21.531944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.810 [2024-07-25 08:49:21.780070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.810 [2024-07-25 08:49:21.780104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.382 08:49:22 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.382 08:49:22 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:15.382 08:49:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.382 Malloc0 00:07:15.641 08:49:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.900 Malloc1 00:07:15.900 08:49:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.900 08:49:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.900 08:49:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.900 08:49:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:15.900 08:49:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.900 08:49:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:15.900 08:49:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.900 08:49:22 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.900 08:49:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.900 08:49:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:15.901 08:49:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.901 08:49:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:15.901 08:49:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:15.901 08:49:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:15.901 08:49:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.901 08:49:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:15.901 /dev/nbd0 00:07:15.901 08:49:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:15.901 08:49:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:15.901 08:49:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:15.901 08:49:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:15.901 08:49:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.901 08:49:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.901 08:49:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:15.901 08:49:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:15.901 08:49:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.901 08:49:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.901 08:49:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:16.160 1+0 records in 00:07:16.160 1+0 
records out 00:07:16.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197404 s, 20.7 MB/s 00:07:16.160 08:49:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:16.160 08:49:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:16.160 08:49:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:16.160 08:49:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.160 08:49:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:16.160 08:49:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.160 08:49:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.160 08:49:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:16.160 /dev/nbd1 00:07:16.160 08:49:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:16.160 08:49:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:16.160 08:49:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:16.160 08:49:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:16.160 08:49:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.160 08:49:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.160 08:49:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:16.160 08:49:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:16.160 08:49:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.160 08:49:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.161 08:49:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:16.161 1+0 records in 00:07:16.161 1+0 records out 00:07:16.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217918 s, 18.8 MB/s 00:07:16.161 08:49:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:16.161 08:49:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:16.161 08:49:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:16.161 08:49:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.161 08:49:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:16.161 08:49:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.161 08:49:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.161 08:49:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.161 08:49:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.161 08:49:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:16.428 { 00:07:16.428 "nbd_device": "/dev/nbd0", 00:07:16.428 "bdev_name": "Malloc0" 00:07:16.428 }, 00:07:16.428 { 00:07:16.428 "nbd_device": "/dev/nbd1", 00:07:16.428 "bdev_name": "Malloc1" 00:07:16.428 } 00:07:16.428 ]' 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:16.428 { 00:07:16.428 "nbd_device": "/dev/nbd0", 00:07:16.428 "bdev_name": "Malloc0" 00:07:16.428 }, 00:07:16.428 { 00:07:16.428 "nbd_device": "/dev/nbd1", 00:07:16.428 "bdev_name": "Malloc1" 00:07:16.428 } 00:07:16.428 ]' 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:16.428 /dev/nbd1' 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:16.428 /dev/nbd1' 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:16.428 256+0 records in 00:07:16.428 256+0 records out 00:07:16.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118872 s, 88.2 MB/s 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.428 08:49:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:16.689 256+0 records in 00:07:16.689 256+0 records out 00:07:16.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245579 s, 42.7 MB/s 00:07:16.689 08:49:23 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:16.689 256+0 records in 00:07:16.689 256+0 records out 00:07:16.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270365 s, 38.8 MB/s 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.689 08:49:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.948 08:49:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.948 08:49:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.948 08:49:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.948 08:49:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.948 08:49:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.948 08:49:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.948 08:49:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.948 08:49:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.948 08:49:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.948 08:49:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:17.206 08:49:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:17.206 08:49:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:17.206 08:49:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:17.206 08:49:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.206 08:49:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.206 08:49:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:17.206 08:49:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:07:17.206 08:49:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.206 08:49:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.206 08:49:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.206 08:49:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.206 08:49:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.206 08:49:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.206 08:49:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.465 08:49:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.466 08:49:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.466 08:49:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.466 08:49:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:17.466 08:49:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.466 08:49:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.466 08:49:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:17.466 08:49:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:17.466 08:49:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:17.466 08:49:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:17.725 08:49:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:19.630 [2024-07-25 08:49:26.241385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.630 [2024-07-25 08:49:26.467732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.630 [2024-07-25 08:49:26.467735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.630 
[2024-07-25 08:49:26.706314] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:19.630 [2024-07-25 08:49:26.706416] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:21.007 08:49:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:21.007 08:49:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:21.007 spdk_app_start Round 1 00:07:21.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:21.007 08:49:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60992 /var/tmp/spdk-nbd.sock 00:07:21.008 08:49:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60992 ']' 00:07:21.008 08:49:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:21.008 08:49:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.008 08:49:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:21.008 08:49:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.008 08:49:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:21.008 08:49:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.008 08:49:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:21.008 08:49:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:21.266 Malloc0 00:07:21.266 08:49:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:21.834 Malloc1 00:07:21.834 08:49:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:21.834 08:49:28 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:21.834 /dev/nbd0 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:21.834 08:49:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:21.834 08:49:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:21.834 08:49:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:21.834 08:49:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.834 08:49:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.834 08:49:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:21.834 08:49:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:21.834 08:49:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.834 08:49:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.834 08:49:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:21.834 1+0 records in 00:07:21.834 1+0 records out 00:07:21.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357945 s, 11.4 MB/s 00:07:21.834 08:49:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:21.834 08:49:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:21.834 08:49:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.094 
08:49:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:22.094 08:49:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:22.094 08:49:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.094 08:49:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.094 08:49:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:22.094 /dev/nbd1 00:07:22.094 08:49:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:22.094 08:49:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:22.094 08:49:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:22.094 08:49:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:22.094 08:49:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:22.094 08:49:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:22.094 08:49:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:22.094 08:49:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:22.094 08:49:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:22.094 08:49:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:22.094 08:49:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:22.094 1+0 records in 00:07:22.094 1+0 records out 00:07:22.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324879 s, 12.6 MB/s 00:07:22.094 08:49:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.094 08:49:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:22.094 08:49:29 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.094 08:49:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:22.094 08:49:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:22.094 08:49:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.094 08:49:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.094 08:49:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.094 08:49:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.094 08:49:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.352 08:49:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:22.352 { 00:07:22.352 "nbd_device": "/dev/nbd0", 00:07:22.352 "bdev_name": "Malloc0" 00:07:22.352 }, 00:07:22.352 { 00:07:22.352 "nbd_device": "/dev/nbd1", 00:07:22.352 "bdev_name": "Malloc1" 00:07:22.352 } 00:07:22.352 ]' 00:07:22.352 08:49:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:22.352 { 00:07:22.352 "nbd_device": "/dev/nbd0", 00:07:22.352 "bdev_name": "Malloc0" 00:07:22.352 }, 00:07:22.352 { 00:07:22.352 "nbd_device": "/dev/nbd1", 00:07:22.352 "bdev_name": "Malloc1" 00:07:22.352 } 00:07:22.352 ]' 00:07:22.352 08:49:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.352 08:49:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:22.352 /dev/nbd1' 00:07:22.353 08:49:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:22.353 /dev/nbd1' 00:07:22.353 08:49:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.353 08:49:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:22.353 08:49:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:22.353 
08:49:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:22.353 08:49:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:22.353 08:49:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:22.353 08:49:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.353 08:49:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.353 08:49:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:22.353 08:49:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:22.353 08:49:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:22.353 08:49:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:22.353 256+0 records in 00:07:22.353 256+0 records out 00:07:22.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157656 s, 66.5 MB/s 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:22.612 256+0 records in 00:07:22.612 256+0 records out 00:07:22.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214264 s, 48.9 MB/s 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:22.612 256+0 records in 00:07:22.612 256+0 records out 00:07:22.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025025 s, 41.9 MB/s 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.612 08:49:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.877 08:49:29 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.877 08:49:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.877 08:49:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.877 08:49:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.877 08:49:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.877 08:49:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.877 08:49:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:22.877 08:49:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.877 08:49:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.877 08:49:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:22.877 08:49:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:22.877 08:49:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:22.877 08:49:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:22.878 08:49:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.878 08:49:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.878 08:49:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:23.155 08:49:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:23.155 08:49:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.155 08:49:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.155 08:49:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.155 08:49:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.155 08:49:30 
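After each nbd_stop_disk RPC, the trace shows waitfornbd_exit polling /proc/partitions until the kernel no longer lists the device. A minimal sketch reconstructed from the xtrace (the loop bounds and sleep interval are taken from the trace; the real SPDK helper may differ):

```shell
# Sketch of waitfornbd_exit as seen in the trace: poll the kernel's
# partition table until the NBD device name disappears, up to 20 tries.
waitfornbd_exit() {
	local nbd_name=$1   # bare device name, e.g. "nbd0"
	local i

	for ((i = 1; i <= 20; i++)); do
		if grep -q -w "$nbd_name" /proc/partitions; then
			sleep 0.1   # still present: give the kernel time to tear it down
		else
			break       # gone from the partition table, as in the trace
		fi
	done
	return 0
}
```

Note the `-w` word match, which keeps `nbd1` from matching entries such as `nbd10`.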
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.155 08:49:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.155 08:49:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.155 08:49:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.155 08:49:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.155 08:49:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.155 08:49:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:23.155 08:49:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.155 08:49:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.155 08:49:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:23.155 08:49:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:23.155 08:49:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:23.155 08:49:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:23.732 08:49:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:25.109 [2024-07-25 08:49:32.180931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:25.368 [2024-07-25 08:49:32.432810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.368 [2024-07-25 08:49:32.432833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.627 [2024-07-25 08:49:32.668488] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:25.627 [2024-07-25 08:49:32.668596] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:26.565 spdk_app_start Round 2 00:07:26.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
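The nbd_get_count sequence traced above turns the JSON array returned by the nbd_get_disks RPC into a device count: jq extracts each `nbd_device` field and `grep -c` counts the `/dev/nbd` lines. A minimal sketch of that logic, reconstructed from the trace (it assumes `jq` is available, as in this CI environment; the real helper invokes the RPC itself rather than taking JSON as an argument):

```shell
# Sketch of the nbd_get_count counting logic seen in the trace.
# The `|| true` mirrors the `true` fallback shown in the xtrace,
# since grep -c exits non-zero when it matches nothing.
nbd_get_count() {
	local nbd_disks_json=$1   # JSON as returned by the nbd_get_disks RPC
	local nbd_disks_name count

	nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
	count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
	echo "$count"
}
```

With all disks stopped the RPC returns `[]`, jq emits nothing, and the count collapses to 0, which is exactly the `count=0` seen in the trace before the app is killed.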
00:07:26.565 08:49:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:26.565 08:49:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:26.565 08:49:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60992 /var/tmp/spdk-nbd.sock 00:07:26.565 08:49:33 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60992 ']' 00:07:26.565 08:49:33 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:26.565 08:49:33 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.565 08:49:33 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:26.565 08:49:33 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.565 08:49:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:26.824 08:49:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.824 08:49:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:26.824 08:49:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:27.084 Malloc0 00:07:27.084 08:49:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:27.343 Malloc1 00:07:27.343 08:49:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:27.343 08:49:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:27.603 /dev/nbd0 00:07:27.603 08:49:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:27.603 08:49:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:27.603 08:49:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:27.603 08:49:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:27.603 08:49:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:27.603 08:49:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:27.603 08:49:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:27.603 08:49:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:27.603 08:49:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:07:27.603 08:49:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:27.603 08:49:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:27.603 1+0 records in 00:07:27.603 1+0 records out 00:07:27.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299264 s, 13.7 MB/s 00:07:27.603 08:49:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:27.603 08:49:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:27.603 08:49:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:27.603 08:49:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:27.603 08:49:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:27.603 08:49:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.603 08:49:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:27.603 08:49:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:27.863 /dev/nbd1 00:07:27.863 08:49:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:27.863 08:49:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:27.863 08:49:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:27.863 08:49:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:27.863 08:49:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:27.863 08:49:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:27.863 08:49:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:27.863 08:49:34 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:07:27.863 08:49:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:27.863 08:49:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:27.863 08:49:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:27.863 1+0 records in 00:07:27.863 1+0 records out 00:07:27.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374495 s, 10.9 MB/s 00:07:27.863 08:49:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:27.863 08:49:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:27.863 08:49:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:27.863 08:49:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:27.863 08:49:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:27.863 08:49:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.863 08:49:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:27.863 08:49:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:27.863 08:49:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.863 08:49:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:28.122 { 00:07:28.122 "nbd_device": "/dev/nbd0", 00:07:28.122 "bdev_name": "Malloc0" 00:07:28.122 }, 00:07:28.122 { 00:07:28.122 "nbd_device": "/dev/nbd1", 00:07:28.122 "bdev_name": "Malloc1" 00:07:28.122 } 00:07:28.122 ]' 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:28.122 { 
00:07:28.122 "nbd_device": "/dev/nbd0", 00:07:28.122 "bdev_name": "Malloc0" 00:07:28.122 }, 00:07:28.122 { 00:07:28.122 "nbd_device": "/dev/nbd1", 00:07:28.122 "bdev_name": "Malloc1" 00:07:28.122 } 00:07:28.122 ]' 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:28.122 /dev/nbd1' 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:28.122 /dev/nbd1' 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:28.122 256+0 records in 00:07:28.122 256+0 records out 00:07:28.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00892383 s, 118 MB/s 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:28.122 08:49:35 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:28.122 256+0 records in 00:07:28.122 256+0 records out 00:07:28.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233173 s, 45.0 MB/s 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:28.122 256+0 records in 00:07:28.122 256+0 records out 00:07:28.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285553 s, 36.7 MB/s 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:28.122 08:49:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.381 08:49:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:28.641 08:49:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:28.641 08:49:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:28.641 08:49:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:28.641 08:49:35 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.641 08:49:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.641 08:49:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:28.641 08:49:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:28.641 08:49:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.641 08:49:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.641 08:49:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.641 08:49:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.900 08:49:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:28.900 08:49:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.900 08:49:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:28.900 08:49:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:28.900 08:49:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:28.900 08:49:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.900 08:49:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:28.900 08:49:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:28.900 08:49:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:28.900 08:49:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:28.900 08:49:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:28.900 08:49:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:28.900 08:49:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:29.467 08:49:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:30.912 
[2024-07-25 08:49:37.802753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:31.170 [2024-07-25 08:49:38.043893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.170 [2024-07-25 08:49:38.043896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.430 [2024-07-25 08:49:38.290912] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:31.430 [2024-07-25 08:49:38.290992] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:32.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:32.366 08:49:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60992 /var/tmp/spdk-nbd.sock 00:07:32.366 08:49:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60992 ']' 00:07:32.366 08:49:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:32.366 08:49:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.366 08:49:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:32.366 08:49:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.366 08:49:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:32.624 08:49:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.624 08:49:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:32.624 08:49:39 event.app_repeat -- event/event.sh@39 -- # killprocess 60992 00:07:32.625 08:49:39 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 60992 ']' 00:07:32.625 08:49:39 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 60992 00:07:32.625 08:49:39 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:32.625 08:49:39 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.625 08:49:39 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60992 00:07:32.625 08:49:39 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.625 08:49:39 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.625 08:49:39 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60992' 00:07:32.625 killing process with pid 60992 00:07:32.625 08:49:39 event.app_repeat -- common/autotest_common.sh@969 -- # kill 60992 00:07:32.625 08:49:39 event.app_repeat -- common/autotest_common.sh@974 -- # wait 60992 00:07:34.000 spdk_app_start is called in Round 0. 00:07:34.000 Shutdown signal received, stop current app iteration 00:07:34.000 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:34.000 spdk_app_start is called in Round 1. 00:07:34.000 Shutdown signal received, stop current app iteration 00:07:34.000 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:34.000 spdk_app_start is called in Round 2. 
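The killprocess trace above shows the shutdown guard applied before terminating the target pid: verify the process exists, look up its command name, refuse to kill anything running as `sudo`, then SIGTERM it and wait for it to exit. A simplified sketch reconstructed from the xtrace of common/autotest_common.sh (the uname check from the trace is omitted here, so this sketch is Linux-only; the real helper has additional platform branches):

```shell
# Sketch of the killprocess helper seen in the trace (simplified).
killprocess() {
	local pid=$1
	[ -n "$pid" ] || return 1

	# kill -0 probes for existence without delivering a signal.
	kill -0 "$pid" || return 1

	# Refuse to kill a sudo wrapper, as the traced guard does.
	local process_name
	process_name=$(ps --no-headers -o comm= "$pid")
	[ "$process_name" != "sudo" ] || return 1

	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid" || true   # reap it; a SIGTERM exit status is expected
}
```

Waiting on the pid after the kill is what lets the following test round start a fresh spdk_tgt without racing the old instance for the RPC socket.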
00:07:34.000 Shutdown signal received, stop current app iteration 00:07:34.000 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:34.000 spdk_app_start is called in Round 3. 00:07:34.000 Shutdown signal received, stop current app iteration 00:07:34.000 08:49:40 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:34.000 08:49:40 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:34.000 00:07:34.000 real 0m19.639s 00:07:34.000 user 0m40.680s 00:07:34.000 sys 0m2.575s 00:07:34.000 08:49:40 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.000 08:49:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:34.000 ************************************ 00:07:34.000 END TEST app_repeat 00:07:34.000 ************************************ 00:07:34.000 08:49:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:34.000 08:49:41 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:34.000 08:49:41 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.000 08:49:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.000 08:49:41 event -- common/autotest_common.sh@10 -- # set +x 00:07:34.000 ************************************ 00:07:34.000 START TEST cpu_locks 00:07:34.000 ************************************ 00:07:34.000 08:49:41 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:34.259 * Looking for test storage... 
00:07:34.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:34.259 08:49:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:34.259 08:49:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:34.259 08:49:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:34.259 08:49:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:34.259 08:49:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.259 08:49:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.259 08:49:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.259 ************************************ 00:07:34.259 START TEST default_locks 00:07:34.259 ************************************ 00:07:34.259 08:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:34.259 08:49:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61435 00:07:34.259 08:49:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:34.259 08:49:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61435 00:07:34.259 08:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 61435 ']' 00:07:34.259 08:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.259 08:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.259 08:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:34.259 08:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.259 08:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.259 [2024-07-25 08:49:41.288468] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:34.259 [2024-07-25 08:49:41.288623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61435 ] 00:07:34.518 [2024-07-25 08:49:41.462049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.776 [2024-07-25 08:49:41.732933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.712 08:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.712 08:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:35.712 08:49:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61435 00:07:35.712 08:49:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:35.712 08:49:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61435 00:07:36.280 08:49:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61435 00:07:36.280 08:49:43 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 61435 ']' 00:07:36.280 08:49:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 61435 00:07:36.280 08:49:43 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:36.280 08:49:43 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.280 08:49:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61435 00:07:36.280 
08:49:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:36.280 08:49:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:36.280 killing process with pid 61435 00:07:36.280 08:49:43 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61435' 00:07:36.280 08:49:43 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 61435 00:07:36.280 08:49:43 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 61435 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61435 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61435 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 61435 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 61435 ']' 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.567 ERROR: process (pid: 61435) is no longer running 00:07:39.567 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61435) - No such process 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:39.567 00:07:39.567 real 0m4.928s 00:07:39.567 user 0m4.871s 00:07:39.567 sys 0m0.667s 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.567 08:49:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.567 ************************************ 00:07:39.567 END TEST default_locks 00:07:39.567 ************************************ 00:07:39.567 
08:49:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:39.567 08:49:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.567 08:49:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.567 08:49:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.567 ************************************ 00:07:39.567 START TEST default_locks_via_rpc 00:07:39.567 ************************************ 00:07:39.567 08:49:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:39.567 08:49:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61523 00:07:39.567 08:49:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:39.567 08:49:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61523 00:07:39.567 08:49:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61523 ']' 00:07:39.567 08:49:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.567 08:49:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.567 08:49:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:39.567 08:49:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.567 08:49:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.567 [2024-07-25 08:49:46.273631] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:39.567 [2024-07-25 08:49:46.273774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61523 ] 00:07:39.567 [2024-07-25 08:49:46.441346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.826 [2024-07-25 08:49:46.702710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:40.762 08:49:47 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61523 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:40.762 08:49:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61523 00:07:41.021 08:49:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61523 00:07:41.021 08:49:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 61523 ']' 00:07:41.021 08:49:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 61523 00:07:41.021 08:49:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:41.021 08:49:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.021 08:49:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61523 00:07:41.021 08:49:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:41.021 08:49:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:41.021 killing process with pid 61523 00:07:41.021 08:49:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61523' 00:07:41.021 08:49:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 61523 00:07:41.021 
08:49:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 61523 00:07:44.306 00:07:44.306 real 0m4.853s 00:07:44.306 user 0m4.768s 00:07:44.306 sys 0m0.668s 00:07:44.306 08:49:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.306 08:49:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.306 ************************************ 00:07:44.306 END TEST default_locks_via_rpc 00:07:44.306 ************************************ 00:07:44.306 08:49:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:44.306 08:49:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.306 08:49:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.306 08:49:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.306 ************************************ 00:07:44.306 START TEST non_locking_app_on_locked_coremask 00:07:44.306 ************************************ 00:07:44.306 08:49:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:44.306 08:49:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61603 00:07:44.306 08:49:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61603 /var/tmp/spdk.sock 00:07:44.306 08:49:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:44.306 08:49:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61603 ']' 00:07:44.306 08:49:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.306 08:49:51 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.306 08:49:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.306 08:49:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.306 08:49:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.306 [2024-07-25 08:49:51.203708] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:44.306 [2024-07-25 08:49:51.203863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61603 ] 00:07:44.306 [2024-07-25 08:49:51.370854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.564 [2024-07-25 08:49:51.616582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.502 08:49:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.502 08:49:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:45.502 08:49:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61624 00:07:45.502 08:49:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61624 /var/tmp/spdk2.sock 00:07:45.502 08:49:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r 
/var/tmp/spdk2.sock 00:07:45.502 08:49:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61624 ']' 00:07:45.502 08:49:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.502 08:49:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.502 08:49:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.502 08:49:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.502 08:49:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.761 [2024-07-25 08:49:52.712097] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:45.761 [2024-07-25 08:49:52.712237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61624 ] 00:07:45.761 [2024-07-25 08:49:52.869469] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:45.761 [2024-07-25 08:49:52.869527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.329 [2024-07-25 08:49:53.368437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.238 08:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.238 08:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:48.238 08:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61603 00:07:48.238 08:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61603 00:07:48.238 08:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:49.174 08:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61603 00:07:49.174 08:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61603 ']' 00:07:49.174 08:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61603 00:07:49.174 08:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:49.174 08:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.174 08:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61603 00:07:49.174 08:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.174 08:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.174 killing process with pid 61603 00:07:49.174 08:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 61603' 00:07:49.174 08:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61603 00:07:49.174 08:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61603 00:07:54.507 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61624 00:07:54.507 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61624 ']' 00:07:54.507 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61624 00:07:54.507 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:54.507 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.507 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61624 00:07:54.507 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:54.507 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:54.507 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61624' 00:07:54.507 killing process with pid 61624 00:07:54.507 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61624 00:07:54.507 08:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61624 00:07:57.791 00:07:57.791 real 0m13.147s 00:07:57.791 user 0m13.395s 00:07:57.791 sys 0m1.357s 00:07:57.791 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:57.791 08:50:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.791 ************************************ 00:07:57.791 END TEST non_locking_app_on_locked_coremask 00:07:57.791 ************************************ 00:07:57.791 08:50:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:57.791 08:50:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.791 08:50:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.791 08:50:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.791 ************************************ 00:07:57.791 START TEST locking_app_on_unlocked_coremask 00:07:57.791 ************************************ 00:07:57.791 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:57.791 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:57.791 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61789 00:07:57.791 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61789 /var/tmp/spdk.sock 00:07:57.791 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61789 ']' 00:07:57.791 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.791 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
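The recurring "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." messages come from a waitforlisten-style poll with a bounded `max_retries`. A minimal sketch under illustrative names and counts (SPDK's real `waitforlisten` also talks RPC over the socket, which this omits):

```shell
#!/usr/bin/env bash
# Poll until a unix-domain socket path appears, bounded by max_retries.
# wait_for_socket is an illustrative name, not SPDK's helper.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}

# With nothing listening, the loop gives up after max_retries polls:
wait_for_socket /tmp/no_such.sock 3 || echo "gave up waiting"
```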
00:07:57.791 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.791 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.791 08:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.791 [2024-07-25 08:50:04.404587] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:57.791 [2024-07-25 08:50:04.404727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61789 ] 00:07:57.791 [2024-07-25 08:50:04.570566] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:57.791 [2024-07-25 08:50:04.570654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.791 [2024-07-25 08:50:04.833774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.169 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.169 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:59.169 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61805 00:07:59.169 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61805 /var/tmp/spdk2.sock 00:07:59.169 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61805 ']' 00:07:59.169 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:59.169 08:50:05 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.169 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:59.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:59.169 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.169 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:59.169 08:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.169 [2024-07-25 08:50:05.985960] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:59.169 [2024-07-25 08:50:05.986118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61805 ] 00:07:59.169 [2024-07-25 08:50:06.149433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.736 [2024-07-25 08:50:06.690134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.637 08:50:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.637 08:50:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:01.637 08:50:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61805 00:08:01.637 08:50:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61805 00:08:01.637 08:50:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:02.621 08:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61789 00:08:02.621 08:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61789 ']' 00:08:02.621 08:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 61789 00:08:02.621 08:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:02.621 08:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.622 08:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61789 00:08:02.622 killing process with pid 61789 00:08:02.622 08:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.622 08:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.622 08:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61789' 00:08:02.622 08:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 61789 00:08:02.622 08:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 61789 00:08:09.180 08:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61805 00:08:09.180 08:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61805 ']' 00:08:09.180 08:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 61805 00:08:09.180 08:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:08:09.180 08:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.180 08:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61805 00:08:09.180 killing process with pid 61805 00:08:09.180 08:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.180 08:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.180 08:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61805' 00:08:09.180 08:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 61805 00:08:09.180 08:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 61805 00:08:11.168 ************************************ 00:08:11.168 END TEST locking_app_on_unlocked_coremask 00:08:11.168 ************************************ 00:08:11.168 00:08:11.168 real 0m13.891s 00:08:11.168 user 0m14.138s 00:08:11.168 sys 0m1.337s 00:08:11.168 08:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.168 08:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:11.168 08:50:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:11.168 08:50:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:11.168 08:50:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.168 08:50:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:11.168 ************************************ 00:08:11.168 START TEST 
locking_app_on_locked_coremask 00:08:11.168 ************************************ 00:08:11.168 08:50:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:11.169 08:50:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61975 00:08:11.169 08:50:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61975 /var/tmp/spdk.sock 00:08:11.169 08:50:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61975 ']' 00:08:11.169 08:50:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.169 08:50:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:11.169 08:50:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.169 08:50:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.169 08:50:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.169 08:50:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:11.426 [2024-07-25 08:50:18.342667] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:11.426 [2024-07-25 08:50:18.343305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61975 ] 00:08:11.426 [2024-07-25 08:50:18.511938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.684 [2024-07-25 08:50:18.796300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61997 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61997 /var/tmp/spdk2.sock 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61997 /var/tmp/spdk2.sock 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61997 /var/tmp/spdk2.sock 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61997 ']' 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.164 08:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:13.164 [2024-07-25 08:50:19.988342] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:13.164 [2024-07-25 08:50:19.988491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61997 ] 00:08:13.164 [2024-07-25 08:50:20.152789] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61975 has claimed it. 00:08:13.164 [2024-07-25 08:50:20.152875] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
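The "Cannot create lock on core 0, probably process 61975 has claimed it" error above is the per-core lock file refusing a second claimant. The refusal behaviour can be reproduced with `flock(1)` on any file; the path below is illustrative, not the real spdk_cpu_lock location:

```shell
#!/usr/bin/env bash
# Two takers, one exclusive flock: the second non-blocking attempt is
# refused while the first open file description still holds the lock.
lockfile=/tmp/demo_core0.lock

exec 8>"$lockfile"               # fd 8 backs the first claim
flock -n 8 && echo "first claim ok"

# A fresh open file description on the same file is refused:
( exec 7>"$lockfile"; flock -n 7 || echo "second claim refused" )
```

This mirrors why the second `spdk_tgt` in the test exits with "Unable to acquire lock on assigned core mask" unless started with `--disable-cpumask-locks`, as the earlier tests in this log do.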
00:08:13.733 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61997) - No such process 00:08:13.733 ERROR: process (pid: 61997) is no longer running 00:08:13.733 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.733 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:13.733 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:13.733 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:13.733 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:13.733 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:13.733 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61975 00:08:13.734 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61975 00:08:13.734 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:13.993 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61975 00:08:13.993 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61975 ']' 00:08:13.993 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61975 00:08:13.993 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:13.993 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.993 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61975 00:08:13.993 
08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.993 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.993 killing process with pid 61975 00:08:13.993 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61975' 00:08:13.993 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61975 00:08:13.993 08:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61975 00:08:17.285 00:08:17.285 real 0m5.721s 00:08:17.285 user 0m5.922s 00:08:17.285 sys 0m0.805s 00:08:17.285 08:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.285 08:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.285 ************************************ 00:08:17.285 END TEST locking_app_on_locked_coremask 00:08:17.285 ************************************ 00:08:17.285 08:50:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:17.285 08:50:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.285 08:50:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.285 08:50:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:17.285 ************************************ 00:08:17.285 START TEST locking_overlapped_coremask 00:08:17.285 ************************************ 00:08:17.285 08:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:17.285 08:50:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62072 00:08:17.285 08:50:23 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:17.285 08:50:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62072 /var/tmp/spdk.sock 00:08:17.285 08:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 62072 ']' 00:08:17.285 08:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.285 08:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.285 08:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.285 08:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.285 08:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.285 [2024-07-25 08:50:24.124343] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:17.285 [2024-07-25 08:50:24.124492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62072 ] 00:08:17.285 [2024-07-25 08:50:24.292705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:17.545 [2024-07-25 08:50:24.574698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.545 [2024-07-25 08:50:24.574750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.545 [2024-07-25 08:50:24.574751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.480 08:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.480 08:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:18.480 08:50:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62101 00:08:18.480 08:50:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62101 /var/tmp/spdk2.sock 00:08:18.480 08:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:18.480 08:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 62101 /var/tmp/spdk2.sock 00:08:18.480 08:50:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:18.480 08:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:18.739 08:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.739 08:50:25 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:18.739 08:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.739 08:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 62101 /var/tmp/spdk2.sock 00:08:18.740 08:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 62101 ']' 00:08:18.740 08:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:18.740 08:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:18.740 08:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:18.740 08:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.740 08:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.740 [2024-07-25 08:50:25.725531] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:18.740 [2024-07-25 08:50:25.725675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62101 ] 00:08:18.999 [2024-07-25 08:50:25.886222] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62072 has claimed it. 00:08:18.999 [2024-07-25 08:50:25.886307] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:08:19.258 ERROR: process (pid: 62101) is no longer running 00:08:19.258 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (62101) - No such process 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62072 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 62072 ']' 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 62072 00:08:19.258 08:50:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62072 00:08:19.258 killing process with pid 62072 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.258 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62072' 00:08:19.259 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 62072 00:08:19.259 08:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 62072 00:08:22.568 00:08:22.568 real 0m5.341s 00:08:22.568 user 0m13.849s 00:08:22.568 sys 0m0.619s 00:08:22.568 08:50:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.568 08:50:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:22.568 ************************************ 00:08:22.568 END TEST locking_overlapped_coremask 00:08:22.568 ************************************ 00:08:22.568 08:50:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:22.568 08:50:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.568 08:50:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.568 08:50:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.568 ************************************ 00:08:22.568 START TEST 
locking_overlapped_coremask_via_rpc 00:08:22.568 ************************************ 00:08:22.568 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:22.568 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62165 00:08:22.568 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 62165 /var/tmp/spdk.sock 00:08:22.568 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:22.568 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 62165 ']' 00:08:22.568 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.568 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.568 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.568 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.568 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.568 [2024-07-25 08:50:29.521719] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:22.568 [2024-07-25 08:50:29.521895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62165 ] 00:08:22.827 [2024-07-25 08:50:29.688395] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:22.827 [2024-07-25 08:50:29.688494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:23.086 [2024-07-25 08:50:29.975357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.086 [2024-07-25 08:50:29.975453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.086 [2024-07-25 08:50:29.975495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.022 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.022 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:24.022 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62194 00:08:24.022 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 62194 /var/tmp/spdk2.sock 00:08:24.022 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:24.022 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 62194 ']' 00:08:24.022 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:24.022 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.022 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:24.022 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:24.022 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.022 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.279 [2024-07-25 08:50:31.190955] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:24.279 [2024-07-25 08:50:31.191110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62194 ] 00:08:24.279 [2024-07-25 08:50:31.354439] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:24.279 [2024-07-25 08:50:31.354522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:24.843 [2024-07-25 08:50:31.929343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.843 [2024-07-25 08:50:31.929478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.843 [2024-07-25 08:50:31.929513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:27.365 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.365 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:27.365 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:27.365 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.365 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.365 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.365 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.365 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:27.365 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.365 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.366 08:50:34 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.366 [2024-07-25 08:50:34.060525] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62165 has claimed it. 00:08:27.366 request: 00:08:27.366 { 00:08:27.366 "method": "framework_enable_cpumask_locks", 00:08:27.366 "req_id": 1 00:08:27.366 } 00:08:27.366 Got JSON-RPC error response 00:08:27.366 response: 00:08:27.366 { 00:08:27.366 "code": -32603, 00:08:27.366 "message": "Failed to claim CPU core: 2" 00:08:27.366 } 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 62165 /var/tmp/spdk.sock 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 62165 ']' 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 62194 /var/tmp/spdk2.sock 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 62194 ']' 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.366 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.623 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.623 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:27.623 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:27.623 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:27.623 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:27.623 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:27.623 00:08:27.623 real 0m5.161s 00:08:27.623 user 0m1.467s 00:08:27.623 sys 0m0.196s 00:08:27.623 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.623 08:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.623 ************************************ 00:08:27.623 END TEST locking_overlapped_coremask_via_rpc 00:08:27.623 ************************************ 00:08:27.623 08:50:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:27.623 08:50:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62165 ]] 00:08:27.623 08:50:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 62165 00:08:27.623 08:50:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 62165 ']' 00:08:27.623 08:50:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 62165 00:08:27.623 08:50:34 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:27.623 08:50:34 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.623 08:50:34 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62165 00:08:27.623 08:50:34 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.623 killing process with pid 62165 00:08:27.623 08:50:34 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.623 08:50:34 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62165' 00:08:27.623 08:50:34 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 62165 00:08:27.623 08:50:34 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 62165 00:08:30.940 08:50:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62194 ]] 00:08:30.940 08:50:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62194 00:08:30.940 08:50:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 62194 ']' 00:08:30.941 08:50:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 62194 00:08:30.941 08:50:37 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:30.941 08:50:37 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.941 08:50:37 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62194 00:08:30.941 08:50:37 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:30.941 08:50:37 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:30.941 08:50:37 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62194' 00:08:30.941 killing 
process with pid 62194 00:08:30.941 08:50:37 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 62194 00:08:30.941 08:50:37 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 62194 00:08:34.223 08:50:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:34.223 08:50:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:34.223 08:50:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62165 ]] 00:08:34.223 08:50:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62165 00:08:34.223 08:50:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 62165 ']' 00:08:34.223 08:50:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 62165 00:08:34.223 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (62165) - No such process 00:08:34.223 08:50:40 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 62165 is not found' 00:08:34.223 Process with pid 62165 is not found 00:08:34.223 08:50:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62194 ]] 00:08:34.223 08:50:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62194 00:08:34.223 08:50:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 62194 ']' 00:08:34.223 08:50:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 62194 00:08:34.223 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (62194) - No such process 00:08:34.223 Process with pid 62194 is not found 00:08:34.223 08:50:40 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 62194 is not found' 00:08:34.223 08:50:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:34.223 00:08:34.223 real 0m59.626s 00:08:34.223 user 1m40.150s 00:08:34.223 sys 0m6.806s 00:08:34.223 08:50:40 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.223 08:50:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:34.223 
************************************ 00:08:34.223 END TEST cpu_locks 00:08:34.223 ************************************ 00:08:34.223 ************************************ 00:08:34.223 END TEST event 00:08:34.223 ************************************ 00:08:34.223 00:08:34.223 real 1m31.351s 00:08:34.223 user 2m38.549s 00:08:34.223 sys 0m10.511s 00:08:34.223 08:50:40 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.223 08:50:40 event -- common/autotest_common.sh@10 -- # set +x 00:08:34.223 08:50:40 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:34.223 08:50:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.223 08:50:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.223 08:50:40 -- common/autotest_common.sh@10 -- # set +x 00:08:34.223 ************************************ 00:08:34.223 START TEST thread 00:08:34.223 ************************************ 00:08:34.224 08:50:40 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:34.224 * Looking for test storage... 
00:08:34.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:34.224 08:50:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:34.224 08:50:40 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:34.224 08:50:40 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.224 08:50:40 thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.224 ************************************ 00:08:34.224 START TEST thread_poller_perf 00:08:34.224 ************************************ 00:08:34.224 08:50:40 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:34.224 [2024-07-25 08:50:40.907248] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:34.224 [2024-07-25 08:50:40.907443] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62397 ] 00:08:34.224 [2024-07-25 08:50:41.085740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.482 [2024-07-25 08:50:41.355198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.482 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:35.856 ====================================== 00:08:35.856 busy:2302295290 (cyc) 00:08:35.856 total_run_count: 320000 00:08:35.856 tsc_hz: 2290000000 (cyc) 00:08:35.856 ====================================== 00:08:35.856 poller_cost: 7194 (cyc), 3141 (nsec) 00:08:35.856 00:08:35.856 real 0m2.004s 00:08:35.856 user 0m1.771s 00:08:35.856 sys 0m0.122s 00:08:35.856 08:50:42 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.856 08:50:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:35.856 ************************************ 00:08:35.856 END TEST thread_poller_perf 00:08:35.856 ************************************ 00:08:35.856 08:50:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:35.856 08:50:42 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:35.856 08:50:42 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.856 08:50:42 thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.856 ************************************ 00:08:35.856 START TEST thread_poller_perf 00:08:35.856 ************************************ 00:08:35.856 08:50:42 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:35.856 [2024-07-25 08:50:42.968686] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:35.856 [2024-07-25 08:50:42.968888] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62440 ] 00:08:36.113 [2024-07-25 08:50:43.141370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.376 [2024-07-25 08:50:43.407157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.377 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:38.273 ====================================== 00:08:38.273 busy:2294286202 (cyc) 00:08:38.274 total_run_count: 4248000 00:08:38.274 tsc_hz: 2290000000 (cyc) 00:08:38.274 ====================================== 00:08:38.274 poller_cost: 540 (cyc), 235 (nsec) 00:08:38.274 00:08:38.274 real 0m1.983s 00:08:38.274 user 0m1.762s 00:08:38.274 sys 0m0.112s 00:08:38.274 08:50:44 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.274 08:50:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:38.274 ************************************ 00:08:38.274 END TEST thread_poller_perf 00:08:38.274 ************************************ 00:08:38.274 08:50:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:38.274 00:08:38.274 real 0m4.208s 00:08:38.274 user 0m3.620s 00:08:38.274 sys 0m0.377s 00:08:38.274 08:50:44 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.274 08:50:44 thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.274 ************************************ 00:08:38.274 END TEST thread 00:08:38.274 ************************************ 00:08:38.274 08:50:45 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:08:38.274 08:50:45 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:38.274 08:50:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
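The poller_cost figures in the two summaries above follow directly from the printed counters: busy cycles divided by total_run_count gives cycles per poll, converted to nanoseconds via tsc_hz. A minimal sketch reproducing that arithmetic (the function name is illustrative, not SPDK's):

```python
# Reconstruct poller_perf's summary arithmetic: cost per poll in TSC
# cycles, then in nanoseconds via the TSC frequency.
def poller_cost(busy_cyc: int, run_count: int, tsc_hz: int) -> tuple[int, int]:
    cyc = busy_cyc // run_count                # cycles per poller invocation
    nsec = cyc * 1_000_000_000 // tsc_hz       # cycles -> ns at tsc_hz
    return cyc, nsec

# Figures from the two runs logged above (1 us period, then 0 us period).
print(poller_cost(2302295290, 320000, 2290000000))   # → (7194, 3141)
print(poller_cost(2294286202, 4248000, 2290000000))  # → (540, 235)
```

The 0-microsecond run amortizes the fixed per-poll overhead over ~13x more invocations, which is why its per-poll cost drops from 3141 ns to 235 ns.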
']' 00:08:38.274 08:50:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.274 08:50:45 -- common/autotest_common.sh@10 -- # set +x 00:08:38.274 ************************************ 00:08:38.274 START TEST app_cmdline 00:08:38.274 ************************************ 00:08:38.274 08:50:45 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:38.274 * Looking for test storage... 00:08:38.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:38.274 08:50:45 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:38.274 08:50:45 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62521 00:08:38.274 08:50:45 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:38.274 08:50:45 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62521 00:08:38.274 08:50:45 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 62521 ']' 00:08:38.274 08:50:45 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.274 08:50:45 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.274 08:50:45 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.274 08:50:45 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.274 08:50:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:38.274 [2024-07-25 08:50:45.260443] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:38.274 [2024-07-25 08:50:45.260595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62521 ] 00:08:38.531 [2024-07-25 08:50:45.428518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.789 [2024-07-25 08:50:45.703247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.775 08:50:46 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.775 08:50:46 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:39.775 08:50:46 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:40.033 { 00:08:40.033 "version": "SPDK v24.09-pre git sha1 704257090", 00:08:40.033 "fields": { 00:08:40.033 "major": 24, 00:08:40.033 "minor": 9, 00:08:40.033 "patch": 0, 00:08:40.033 "suffix": "-pre", 00:08:40.033 "commit": "704257090" 00:08:40.033 } 00:08:40.033 } 00:08:40.033 08:50:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:40.033 08:50:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:40.033 08:50:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:40.033 08:50:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:40.033 08:50:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:40.033 08:50:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:40.034 08:50:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:40.034 08:50:46 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.034 08:50:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:40.034 08:50:46 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.034 08:50:47 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:40.034 08:50:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:40.034 08:50:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:40.034 08:50:47 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:40.034 08:50:47 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:40.034 08:50:47 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:40.034 08:50:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.034 08:50:47 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:40.034 08:50:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.034 08:50:47 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:40.034 08:50:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.034 08:50:47 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:40.034 08:50:47 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:40.034 08:50:47 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:40.292 request: 00:08:40.292 { 00:08:40.292 "method": "env_dpdk_get_mem_stats", 00:08:40.292 "req_id": 1 00:08:40.292 } 00:08:40.292 Got JSON-RPC error response 00:08:40.292 response: 00:08:40.292 { 00:08:40.292 "code": -32601, 00:08:40.292 "message": "Method not found" 00:08:40.292 } 00:08:40.292 08:50:47 app_cmdline -- common/autotest_common.sh@653 -- # es=1 
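The env_dpdk_get_mem_stats failure above is the expected outcome: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that list is rejected with JSON-RPC error -32601 ("Method not found"). A minimal sketch of that allow-list behaviour (the dispatcher and names are illustrative, not SPDK's actual server code):

```python
import json

# Allow-list mirroring the spdk_tgt invocation logged above.
ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(request_json: str) -> dict:
    """Reject methods outside the allow-list with JSON-RPC -32601."""
    req = json.loads(request_json)
    if req["method"] not in ALLOWED:
        return {"code": -32601, "message": "Method not found"}
    return {"result": "ok"}

print(dispatch('{"method": "env_dpdk_get_mem_stats", "req_id": 1}'))
# → {'code': -32601, 'message': 'Method not found'}
```

-32601 is the standard JSON-RPC code for a method the server does not expose, which is exactly what cmdline.sh asserts on here.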
00:08:40.292 08:50:47 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:40.292 08:50:47 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:40.292 08:50:47 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:40.292 08:50:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62521 00:08:40.292 08:50:47 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 62521 ']' 00:08:40.292 08:50:47 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 62521 00:08:40.292 08:50:47 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:40.292 08:50:47 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.292 08:50:47 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62521 00:08:40.292 08:50:47 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.292 08:50:47 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.292 killing process with pid 62521 00:08:40.292 08:50:47 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62521' 00:08:40.292 08:50:47 app_cmdline -- common/autotest_common.sh@969 -- # kill 62521 00:08:40.292 08:50:47 app_cmdline -- common/autotest_common.sh@974 -- # wait 62521 00:08:43.574 00:08:43.574 real 0m5.181s 00:08:43.574 user 0m5.429s 00:08:43.574 sys 0m0.587s 00:08:43.574 08:50:50 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.574 08:50:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:43.574 ************************************ 00:08:43.574 END TEST app_cmdline 00:08:43.574 ************************************ 00:08:43.574 08:50:50 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:43.574 08:50:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.574 08:50:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.574 08:50:50 -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.574 ************************************ 00:08:43.574 START TEST version 00:08:43.574 ************************************ 00:08:43.574 08:50:50 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:43.574 * Looking for test storage... 00:08:43.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:43.574 08:50:50 version -- app/version.sh@17 -- # get_header_version major 00:08:43.574 08:50:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:43.574 08:50:50 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.574 08:50:50 version -- app/version.sh@14 -- # cut -f2 00:08:43.574 08:50:50 version -- app/version.sh@17 -- # major=24 00:08:43.574 08:50:50 version -- app/version.sh@18 -- # get_header_version minor 00:08:43.574 08:50:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:43.574 08:50:50 version -- app/version.sh@14 -- # cut -f2 00:08:43.574 08:50:50 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.574 08:50:50 version -- app/version.sh@18 -- # minor=9 00:08:43.574 08:50:50 version -- app/version.sh@19 -- # get_header_version patch 00:08:43.574 08:50:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:43.574 08:50:50 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.574 08:50:50 version -- app/version.sh@14 -- # cut -f2 00:08:43.574 08:50:50 version -- app/version.sh@19 -- # patch=0 00:08:43.574 08:50:50 version -- app/version.sh@20 -- # get_header_version suffix 00:08:43.574 08:50:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:43.574 08:50:50 version -- 
app/version.sh@14 -- # tr -d '"' 00:08:43.574 08:50:50 version -- app/version.sh@14 -- # cut -f2 00:08:43.574 08:50:50 version -- app/version.sh@20 -- # suffix=-pre 00:08:43.574 08:50:50 version -- app/version.sh@22 -- # version=24.9 00:08:43.574 08:50:50 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:43.574 08:50:50 version -- app/version.sh@28 -- # version=24.9rc0 00:08:43.574 08:50:50 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:43.574 08:50:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:43.574 08:50:50 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:43.574 08:50:50 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:43.574 00:08:43.574 real 0m0.184s 00:08:43.574 user 0m0.097s 00:08:43.574 sys 0m0.117s 00:08:43.574 08:50:50 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.574 08:50:50 version -- common/autotest_common.sh@10 -- # set +x 00:08:43.574 ************************************ 00:08:43.574 END TEST version 00:08:43.574 ************************************ 00:08:43.574 08:50:50 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:08:43.574 08:50:50 -- spdk/autotest.sh@202 -- # uname -s 00:08:43.574 08:50:50 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:08:43.574 08:50:50 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:43.574 08:50:50 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:43.574 08:50:50 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:08:43.574 08:50:50 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:43.574 08:50:50 -- spdk/autotest.sh@264 -- # timing_exit lib 00:08:43.574 08:50:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:43.574 08:50:50 -- common/autotest_common.sh@10 -- # set +x 00:08:43.574 08:50:50 -- 
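The 24.9rc0 value checked above is assembled from the SPDK_VERSION_* defines grepped out of version.h: major.minor, the patch component only when non-zero, and the "-pre" suffix rendered as rc0. A sketch of that rule as inferred from the trace (a hypothetical helper, not part of version.sh):

```python
# Assemble the expected version string the way the trace above does:
# patch appended only when non-zero, "-pre" suffix mapped to "rc0".
def spdk_version(major: int, minor: int, patch: int, suffix: str) -> str:
    v = f"{major}.{minor}"
    if patch != 0:
        v += f".{patch}"
    if suffix == "-pre":
        v += "rc0"
    return v

print(spdk_version(24, 9, 0, "-pre"))  # → 24.9rc0
```

The test then compares this against `spdk.__version__` from the Python package, so a mismatch between version.h and the packaged module fails the run.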
spdk/autotest.sh@266 -- # '[' 1 -eq 1 ']' 00:08:43.574 08:50:50 -- spdk/autotest.sh@267 -- # run_test iscsi_tgt /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:08:43.574 08:50:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.574 08:50:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.574 08:50:50 -- common/autotest_common.sh@10 -- # set +x 00:08:43.574 ************************************ 00:08:43.574 START TEST iscsi_tgt 00:08:43.574 ************************************ 00:08:43.574 08:50:50 iscsi_tgt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:08:43.574 * Looking for test storage... 00:08:43.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt 00:08:43.574 08:50:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # uname -s 00:08:43.574 08:50:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:43.574 08:50:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:43.574 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:43.574 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:43.574 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:43.574 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:43.574 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:43.574 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:43.574 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:43.574 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:43.574 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:43.575 
08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@18 -- # iscsicleanup 00:08:43.575 08:50:50 iscsi_tgt -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:08:43.575 Cleaning up iSCSI connection 00:08:43.575 08:50:50 iscsi_tgt -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:08:43.575 iscsiadm: No matching sessions found 00:08:43.575 08:50:50 iscsi_tgt -- common/autotest_common.sh@983 -- # true 00:08:43.575 08:50:50 iscsi_tgt -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:08:43.575 iscsiadm: No records found 00:08:43.575 08:50:50 iscsi_tgt -- common/autotest_common.sh@984 -- # true 00:08:43.575 08:50:50 iscsi_tgt -- common/autotest_common.sh@985 -- # rm -rf 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@21 -- # create_veth_interfaces 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # ip link set init_br nomaster 00:08:43.575 Cannot find device "init_br" 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # true 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # ip link set tgt_br nomaster 00:08:43.575 Cannot find device "tgt_br" 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # true 00:08:43.575 
08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # ip link set tgt_br2 nomaster 00:08:43.575 Cannot find device "tgt_br2" 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # true 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # ip link set init_br down 00:08:43.575 Cannot find device "init_br" 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # true 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # ip link set tgt_br down 00:08:43.575 Cannot find device "tgt_br" 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # true 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # ip link set tgt_br2 down 00:08:43.575 Cannot find device "tgt_br2" 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # true 00:08:43.575 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # ip link delete iscsi_br type bridge 00:08:43.833 Cannot find device "iscsi_br" 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # true 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # ip link delete spdk_init_int 00:08:43.833 Cannot find device "spdk_init_int" 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # true 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:08:43.833 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # true 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:08:43.833 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # true 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # ip netns del spdk_iscsi_ns 00:08:43.833 Cannot remove namespace file "/var/run/netns/spdk_iscsi_ns": No such file or directory 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # 
true 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@44 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@47 -- # ip netns add spdk_iscsi_ns 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@50 -- # ip link add spdk_init_int type veth peer name init_br 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@51 -- # ip link add spdk_tgt_int type veth peer name tgt_br 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@52 -- # ip link add spdk_tgt_int2 type veth peer name tgt_br2 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@55 -- # ip link set spdk_tgt_int netns spdk_iscsi_ns 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@56 -- # ip link set spdk_tgt_int2 netns spdk_iscsi_ns 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@59 -- # ip addr add 10.0.0.2/24 dev spdk_init_int 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@60 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.1/24 dev spdk_tgt_int 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@61 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.3/24 dev spdk_tgt_int2 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@64 -- # ip link set spdk_init_int up 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@65 -- # ip link set init_br up 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@66 -- # ip link set tgt_br up 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@67 -- # ip link set tgt_br2 up 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@68 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int up 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@69 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int2 up 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@70 -- # ip netns exec spdk_iscsi_ns ip link set lo up 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@73 -- # ip link add iscsi_br type bridge 00:08:43.833 08:50:50 iscsi_tgt -- 
iscsi_tgt/common.sh@74 -- # ip link set iscsi_br up 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@77 -- # ip link set init_br master iscsi_br 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@78 -- # ip link set tgt_br master iscsi_br 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@79 -- # ip link set tgt_br2 master iscsi_br 00:08:43.833 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@82 -- # iptables -I INPUT 1 -i spdk_init_int -p tcp --dport 3260 -j ACCEPT 00:08:44.091 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@83 -- # iptables -A FORWARD -i iscsi_br -o iscsi_br -j ACCEPT 00:08:44.091 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@86 -- # ping -c 1 10.0.0.1 00:08:44.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:08:44.091 00:08:44.091 --- 10.0.0.1 ping statistics --- 00:08:44.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.091 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:08:44.091 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@87 -- # ping -c 1 10.0.0.3 00:08:44.091 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:44.091 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:08:44.091 00:08:44.091 --- 10.0.0.3 ping statistics --- 00:08:44.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.091 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:44.091 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@88 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:08:44.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:44.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:08:44.091 00:08:44.091 --- 10.0.0.2 ping statistics --- 00:08:44.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.091 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:44.091 08:50:50 iscsi_tgt -- iscsi_tgt/common.sh@89 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:08:44.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:08:44.091 00:08:44.091 --- 10.0.0.2 ping statistics --- 00:08:44.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.091 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:44.091 08:50:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@23 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:08:44.091 08:50:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@25 -- # run_test iscsi_tgt_sock /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:08:44.091 08:50:50 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.091 08:50:50 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.091 08:50:50 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:08:44.091 ************************************ 00:08:44.091 START TEST iscsi_tgt_sock 00:08:44.091 ************************************ 00:08:44.091 08:50:50 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:08:44.091 * Looking for test storage... 
00:08:44.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:44.092 08:50:51 
iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@48 -- # iscsitestinit 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@50 -- # HELLO_SOCK_APP='ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/examples/hello_sock' 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@51 -- # SOCAT_APP=socat 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@52 -- # OPENSSL_APP=openssl 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@53 -- # PSK='-N ssl -E 1234567890ABCDEF -I psk.spdk.io' 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@58 -- # timing_enter sock_client 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:08:44.092 Testing client path 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@59 -- # echo 'Testing client path' 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@63 -- # server_pid=62871 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@62 -- # socat tcp-l:3260,fork,bind=10.0.0.2 exec:/bin/cat 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@64 -- # trap 'killprocess $server_pid;iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@66 -- # waitfortcp 62871 10.0.0.2:3260 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@25 -- # local addr=10.0.0.2:3260 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@27 -- # echo 'Waiting for process to start up and listen on address 10.0.0.2:3260...' 
00:08:44.092 Waiting for process to start up and listen on address 10.0.0.2:3260... 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@29 -- # xtrace_disable 00:08:44.092 08:50:51 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:08:44.763 [2024-07-25 08:50:51.671207] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:44.763 [2024-07-25 08:50:51.671328] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62876 ] 00:08:44.763 [2024-07-25 08:50:51.819646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.021 [2024-07-25 08:50:52.107400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.021 [2024-07-25 08:50:52.107496] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:45.021 [2024-07-25 08:50:52.107529] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:08:45.021 [2024-07-25 08:50:52.107693] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 55754) 00:08:45.021 [2024-07-25 08:50:52.107789] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:08:46.396 [2024-07-25 08:50:53.105901] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:46.396 [2024-07-25 08:50:53.106119] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:46.654 [2024-07-25 08:50:53.631012] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
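The client-path runs above pair hello_sock with a plain socat echo server (exec:/bin/cat bound to 10.0.0.2:3260). A loopback analogue of that exchange, with 127.0.0.1 and an ephemeral port standing in for the veth addresses and port 3260:

```python
import socket
import threading

def echo_once(srv: socket.socket) -> None:
    # Accept one connection and echo a single message back, like /bin/cat.
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))

srv = socket.create_server(("127.0.0.1", 0))  # ephemeral port, not 3260
port = srv.getsockname()[1]
threading.Thread(target=echo_once, args=(srv,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as cli:
    cli.sendall(b"hello")
    reply = cli.recv(1024)
print(reply)  # → b'hello'
```

hello_sock does the same connect/write/close dance, which is why each run logs a "Connection accepted" pair followed about a second later by "Connection closed".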
00:08:46.654 [2024-07-25 08:50:53.631144] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62907 ] 00:08:46.912 [2024-07-25 08:50:53.798984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.171 [2024-07-25 08:50:54.082930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.171 [2024-07-25 08:50:54.083030] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:47.171 [2024-07-25 08:50:54.083064] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:08:47.171 [2024-07-25 08:50:54.083230] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 55756) 00:08:47.171 [2024-07-25 08:50:54.083329] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:08:48.151 [2024-07-25 08:50:55.081441] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:48.151 [2024-07-25 08:50:55.081702] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:48.720 [2024-07-25 08:50:55.642834] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:48.720 [2024-07-25 08:50:55.642969] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62938 ] 00:08:48.720 [2024-07-25 08:50:55.800354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.978 [2024-07-25 08:50:56.067467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.978 [2024-07-25 08:50:56.067562] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:48.978 [2024-07-25 08:50:56.067594] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:08:48.978 [2024-07-25 08:50:56.067897] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 55772) 00:08:48.978 [2024-07-25 08:50:56.067995] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:08:50.357 [2024-07-25 08:50:57.066105] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:50.357 [2024-07-25 08:50:57.066343] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:50.617 killing process with pid 62871 00:08:50.617 Testing SSL server path 00:08:50.617 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:08:50.617 [2024-07-25 08:50:57.701561] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:50.617 [2024-07-25 08:50:57.701687] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62988 ]
00:08:50.877 [2024-07-25 08:50:57.871399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:51.135 [2024-07-25 08:50:58.139578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:51.135 [2024-07-25 08:50:58.139673] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:08:51.135 [2024-07-25 08:50:58.139762] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(ssl)
00:08:51.135 [2024-07-25 08:50:58.216482] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:08:51.135 [2024-07-25 08:50:58.216623] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62993 ]
00:08:51.393 [2024-07-25 08:50:58.368223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:51.651 [2024-07-25 08:50:58.641935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:51.651 [2024-07-25 08:50:58.642035] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:08:51.651 [2024-07-25 08:50:58.642079] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl)
00:08:51.651 [2024-07-25 08:50:58.647954] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 37326)
00:08:51.651 [2024-07-25 08:50:58.647954] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 37326) to (10.0.0.1, 3260)
00:08:51.651 [2024-07-25 08:50:58.651716] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection...
00:08:52.617 [2024-07-25 08:50:59.649872] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:08:52.617 [2024-07-25 08:50:59.650080] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:08:52.617 [2024-07-25 08:50:59.650261] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:08:53.186 [2024-07-25 08:51:00.211066] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:08:53.186 [2024-07-25 08:51:00.211189] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63022 ]
00:08:53.444 [2024-07-25 08:51:00.376725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:53.703 [2024-07-25 08:51:00.652576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:53.703 [2024-07-25 08:51:00.652676] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:08:53.703 [2024-07-25 08:51:00.652721] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl)
00:08:53.703 [2024-07-25 08:51:00.655004] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 52796) to (10.0.0.1, 3260)
00:08:53.703 [2024-07-25 08:51:00.658863] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 52796)
00:08:53.703 [2024-07-25 08:51:00.661919] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection...
00:08:54.638 [2024-07-25 08:51:01.660067] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:08:54.638 [2024-07-25 08:51:01.660301] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:08:54.638 [2024-07-25 08:51:01.660455] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:08:55.204 [2024-07-25 08:51:02.231205] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:08:55.204 [2024-07-25 08:51:02.231356] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63051 ]
00:08:55.463 [2024-07-25 08:51:02.400480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:55.722 [2024-07-25 08:51:02.668253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:55.722 [2024-07-25 08:51:02.668455] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:08:55.722 [2024-07-25 08:51:02.668523] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl)
00:08:55.722 [2024-07-25 08:51:02.670048] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 52806) to (10.0.0.1, 3260)
00:08:55.722 [2024-07-25 08:51:02.674215] posix.c: 755:posix_sock_create_ssl_context: *ERROR*: Incorrect TLS version provided: 7
00:08:55.722 [2024-07-25 08:51:02.674413] posix.c:1033:posix_sock_create: *ERROR*: posix_sock_create_ssl_context() failed, errno = 2
00:08:55.722 [2024-07-25 08:51:02.674493] hello_sock.c: 309:hello_sock_connect: *ERROR*: connect error(2): No such file or directory
00:08:55.722 [2024-07-25 08:51:02.674525] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:55.722 [2024-07-25 08:51:02.674668] hello_sock.c: 591:main: *ERROR*: ERROR starting application
00:08:55.722 [2024-07-25 08:51:02.674684] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:08:55.722 [2024-07-25 08:51:02.674709] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:08:56.289 [2024-07-25 08:51:03.226637] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:08:56.289 [2024-07-25 08:51:03.226884] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63067 ]
00:08:56.289 [2024-07-25 08:51:03.394214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:56.855 [2024-07-25 08:51:03.684679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:56.855 [2024-07-25 08:51:03.684905] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:08:56.855 [2024-07-25 08:51:03.684973] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl)
00:08:56.855 [2024-07-25 08:51:03.687033] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 52810) to (10.0.0.1, 3260)
00:08:56.855 [2024-07-25 08:51:03.690776] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 52810)
00:08:56.855 [2024-07-25 08:51:03.693938] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection...
00:08:57.788 [2024-07-25 08:51:04.692153] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:08:57.788 [2024-07-25 08:51:04.692477] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:08:57.788 [2024-07-25 08:51:04.692636] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:08:58.357 SSL_connect:before SSL initialization
00:08:58.357 SSL_connect:SSLv3/TLS write client hello
00:08:58.357 [2024-07-25 08:51:05.267029] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 53950) to (10.0.0.1, 3260)
00:08:58.357 SSL_connect:SSLv3/TLS write client hello
00:08:58.357 SSL_connect:SSLv3/TLS read server hello
00:08:58.357 Can't use SSL_get_servername
00:08:58.357 SSL_connect:TLSv1.3 read encrypted extensions
00:08:58.357 SSL_connect:SSLv3/TLS read finished
00:08:58.357 SSL_connect:SSLv3/TLS write change cipher spec
00:08:58.357 SSL_connect:SSLv3/TLS write finished
00:08:58.357 SSL_connect:SSL negotiation finished successfully
00:08:58.357 SSL_connect:SSL negotiation finished successfully
00:08:58.357 SSL_connect:SSLv3/TLS read server session ticket
00:09:00.266 DONE
00:09:00.266 SSL3 alert write:warning:close notify
00:09:00.267 [2024-07-25 08:51:07.206919] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:09:00.267 [2024-07-25 08:51:07.280551] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:00.267 [2024-07-25 08:51:07.281100] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63123 ]
00:09:00.525 [2024-07-25 08:51:07.460469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:00.785 [2024-07-25 08:51:07.719937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:00.785 [2024-07-25 08:51:07.720108] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:09:00.785 [2024-07-25 08:51:07.720171] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl)
00:09:00.785 [2024-07-25 08:51:07.721472] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 52822) to (10.0.0.1, 3260)
00:09:00.785 [2024-07-25 08:51:07.725365] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 52822)
00:09:00.785 [2024-07-25 08:51:07.726703] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:09:00.785 [2024-07-25 08:51:07.726714] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:09:01.721 [2024-07-25 08:51:08.724781] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:09:01.721 [2024-07-25 08:51:08.725112] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:01.721 [2024-07-25 08:51:08.725189] hello_sock.c: 591:main: *ERROR*: ERROR starting application
00:09:01.721 [2024-07-25 08:51:08.725201] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:09:02.289 [2024-07-25 08:51:09.263369] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:02.289 [2024-07-25 08:51:09.263486] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63144 ]
00:09:02.549 [2024-07-25 08:51:09.433903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:02.807 [2024-07-25 08:51:09.678838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:02.807 [2024-07-25 08:51:09.679015] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:09:02.807 [2024-07-25 08:51:09.679074] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl)
00:09:02.807 [2024-07-25 08:51:09.680401] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 52826) to (10.0.0.1, 3260)
00:09:02.807 [2024-07-25 08:51:09.684115] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 52826)
00:09:02.807 [2024-07-25 08:51:09.685217] posix.c: 586:posix_sock_psk_find_session_server_cb: *ERROR*: Unknown Client's PSK ID
00:09:02.807 [2024-07-25 08:51:09.685353] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:09:02.807 [2024-07-25 08:51:09.685355] hello_sock.c: 240:hello_sock_writev_poll: *ERROR*: Write to socket failed. Closing connection...
00:09:02.807 [2024-07-25 08:51:09.685407] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:09:03.742 [2024-07-25 08:51:10.683471] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:09:03.742 [2024-07-25 08:51:10.683852] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:03.742 [2024-07-25 08:51:10.683960] hello_sock.c: 591:main: *ERROR*: ERROR starting application
00:09:03.742 [2024-07-25 08:51:10.684002] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:09:04.308 killing process with pid 62988
00:09:05.245 [2024-07-25 08:51:12.172970] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:09:05.245 [2024-07-25 08:51:12.173326] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:09:05.811 Waiting for process to start up and listen on address 10.0.0.1:3260...
00:09:05.811 [2024-07-25 08:51:12.734992] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:05.811 [2024-07-25 08:51:12.735183] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63213 ]
00:09:05.811 [2024-07-25 08:51:12.897940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:06.069 [2024-07-25 08:51:13.155365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:06.069 [2024-07-25 08:51:13.155549] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:09:06.069 [2024-07-25 08:51:13.155669] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(posix)
00:09:06.328 [2024-07-25 08:51:13.205774] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 44464) to (10.0.0.1, 3260)
00:09:06.328 [2024-07-25 08:51:13.206033] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:09:06.328 killing process with pid 63213
00:09:07.263 [2024-07-25 08:51:14.228801] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:09:07.263 [2024-07-25 08:51:14.229171] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:09:07.829 ************************************
00:09:07.829 END TEST iscsi_tgt_sock
00:09:07.829 ************************************
00:09:07.829
00:09:07.829 real	0m23.789s
00:09:07.829 user	0m30.557s
00:09:07.829 sys	0m2.561s
08:51:14 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:07.829 08:51:14 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x
00:09:07.829 08:51:14 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@26 -- # [[ -d /usr/local/calsoft ]]
00:09:07.829 08:51:14 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@27 -- # run_test iscsi_tgt_calsoft /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh
00:09:07.829 08:51:14 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:07.829 08:51:14 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:07.829 08:51:14 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x
00:09:07.829 ************************************
00:09:07.829 START TEST iscsi_tgt_calsoft
00:09:07.829 ************************************
00:09:07.829 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh
00:09:08.088 * Looking for test storage...
00:09:08.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@15 -- # MALLOC_BDEV_SIZE=64
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@16 -- # MALLOC_BLOCK_SIZE=512
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@18 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@19 -- # calsoft_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@22 -- # mkdir -p /usr/local/etc
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@23 -- # cp /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/its.conf /usr/local/etc/
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@26 -- # echo IP=10.0.0.1
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@28 -- # timing_enter start_iscsi_tgt
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x
00:09:08.088 Process pid: 63306
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@30 -- # iscsitestinit
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']'
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@33 -- # pid=63306
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@34 -- # echo 'Process pid: 63306'
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@32 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x1 --wait-for-rpc
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@36 -- # trap 'killprocess $pid; delete_tmp_conf_files; iscsitestfini; exit 1 ' SIGINT SIGTERM EXIT
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@38 -- # waitforlisten 63306
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@831 -- # '[' -z 63306 ']'
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:08.088 08:51:14 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x
00:09:08.088 [2024-07-25 08:51:15.122393] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:08.088 [2024-07-25 08:51:15.122642] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63306 ]
00:09:08.348 [2024-07-25 08:51:15.288951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.606 [2024-07-25 08:51:15.570889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.173 08:51:15 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:09.173 08:51:15 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@864 -- # return 0
00:09:09.173 08:51:15 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config
00:09:09.173 08:51:16 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:09:10.551 08:51:17 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@41 -- # echo 'iscsi_tgt is listening. Running tests...'
00:09:10.551 iscsi_tgt is listening. Running tests...
00:09:10.551 08:51:17 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@43 -- # timing_exit start_iscsi_tgt
00:09:10.551 08:51:17 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:10.551 08:51:17 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x
00:09:10.551 08:51:17 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_auth_group 1 -c 'user:root secret:tester'
00:09:10.809 08:51:17 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_discovery_auth -g 1
00:09:11.067 08:51:17 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
00:09:11.067 08:51:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
00:09:11.326 08:51:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b MyBdev 64 512
00:09:11.584 MyBdev
00:09:11.584 08:51:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -g 1
00:09:11.843 08:51:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@55 -- # sleep 1
00:09:13.221 08:51:19 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@57 -- # '[' '' ']'
00:09:13.221 08:51:19 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py /home/vagrant/spdk_repo/spdk/../output
00:09:13.221 [2024-07-25 08:51:19.997906] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12
00:09:13.221 [2024-07-25 08:51:20.038481] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:09:13.221 [2024-07-25 08:51:20.054698] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:09:13.221 [2024-07-25 08:51:20.094713] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:13.221 [2024-07-25 08:51:20.094960] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:13.221 [2024-07-25 08:51:20.114556] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:13.221 [2024-07-25 08:51:20.114727] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:13.221 [2024-07-25 08:51:20.136965] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:09:13.221 [2024-07-25 08:51:20.168660] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:13.221 [2024-07-25 08:51:20.168813] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:13.221 [2024-07-25 08:51:20.183790] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:09:13.221 [2024-07-25 08:51:20.227794] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:09:13.221 [2024-07-25 08:51:20.227858] iscsi.c:3961:iscsi_handle_recovery_datain: *ERROR*: Initiator requests BegRun: 0x00000000, RunLength:0x00001000 greater than maximum DataSN: 0x00000004.
00:09:13.221 [2024-07-25 08:51:20.227880] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=10) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000)
00:09:13.221 [2024-07-25 08:51:20.227890] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection
00:09:13.221 [2024-07-25 08:51:20.247326] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:09:13.221 [2024-07-25 08:51:20.268798] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:09:13.221 [2024-07-25 08:51:20.288706] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:09:13.221 [2024-07-25 08:51:20.303708] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:09:13.480 [2024-07-25 08:51:20.350644] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:09:13.480 [2024-07-25 08:51:20.350920] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:13.480 [2024-07-25 08:51:20.452455] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:09:13.480 [2024-07-25 08:51:20.466997] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:09:13.480 [2024-07-25 08:51:20.467132] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:13.480 [2024-07-25 08:51:20.504638] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting
00:09:13.480 [2024-07-25 08:51:20.567372] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:13.480 [2024-07-25 08:51:20.567666] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:13.480 [2024-07-25 08:51:20.588520] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6
00:09:13.738 [2024-07-25 08:51:20.618749] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:09:13.738 [2024-07-25 08:51:20.618930] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:13.738 [2024-07-25 08:51:20.652888] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:09:13.738 [2024-07-25 08:51:20.673217] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:09:13.738 [2024-07-25 08:51:20.687788] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=3, MaxCmdSN=66)
00:09:13.738 [2024-07-25 08:51:20.687927] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:13.738 [2024-07-25 08:51:20.688051] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=5, MaxCmdSN=67)
00:09:13.738 [2024-07-25 08:51:20.688135] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=6, MaxCmdSN=67)
00:09:13.738 [2024-07-25 08:51:20.688698] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9
00:09:13.738 [2024-07-25 08:51:20.702290] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:09:13.738 [2024-07-25 08:51:20.702585] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:13.738 [2024-07-25 08:51:20.724025] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2
00:09:13.738 [2024-07-25 08:51:20.738453] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:13.738 [2024-07-25 08:51:20.738580] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:13.738 [2024-07-25 08:51:20.768926] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:09:13.738 [2024-07-25 08:51:20.784546] iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0
00:09:13.738 [2024-07-25 08:51:20.784676] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:09:13.738 [2024-07-25 08:51:20.799981] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:09:13.738 [2024-07-25 08:51:20.800283] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:13.738 [2024-07-25 08:51:20.814184] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:13.738 [2024-07-25 08:51:20.814325] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:13.738 [2024-07-25 08:51:20.848520] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature
00:09:13.738 PDU
00:09:13.738 00000000 01 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=.....
00:09:13.738 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................
00:09:13.738 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00:09:13.738 [2024-07-25 08:51:20.848632] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection
00:09:13.998 [2024-07-25 08:51:20.916795] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:09:13.998 [2024-07-25 08:51:20.962790] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:09:13.998 [2024-07-25 08:51:20.979261] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(2) ignore (ExpCmdSN=3, MaxCmdSN=66)
00:09:13.998 [2024-07-25 08:51:20.979397] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:13.998 [2024-07-25 08:51:20.979464] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:13.998 [2024-07-25 08:51:21.036304] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:09:13.998 [2024-07-25 08:51:21.036873] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:13.998 [2024-07-25 08:51:21.080991] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:09:13.998 [2024-07-25 08:51:21.109765] iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU
00:09:13.998 [2024-07-25 08:51:21.109839] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000)
00:09:13.998 [2024-07-25 08:51:21.109853] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection
00:09:14.257 [2024-07-25 08:51:21.135361] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:14.257 [2024-07-25 08:51:21.135482] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:14.257 [2024-07-25 08:51:21.166843] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:09:14.257 [2024-07-25 08:51:21.167116] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:14.257 [2024-07-25 08:51:21.199602] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:09:14.257 [2024-07-25 08:51:21.199747] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:14.257 [2024-07-25 08:51:21.213007] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:09:14.257 [2024-07-25 08:51:21.213307] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:14.257 [2024-07-25 08:51:21.232647] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:14.257 [2024-07-25 08:51:21.232780] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:14.257 [2024-07-25 08:51:21.246067] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:09:14.257 [2024-07-25 08:51:21.246200] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:14.257 [2024-07-25 08:51:21.266373] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:09:14.257 [2024-07-25 08:51:21.296011] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(341) ignore (ExpCmdSN=8, MaxCmdSN=71)
00:09:14.257 [2024-07-25 08:51:21.296150] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(8) ignore (ExpCmdSN=9, MaxCmdSN=71)
00:09:14.257 [2024-07-25 08:51:21.296666] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12
00:09:14.257 [2024-07-25 08:51:21.331534] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0
00:09:14.257 [2024-07-25 08:51:21.341497] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature
00:09:14.257 PDU
00:09:14.257 00000000 00 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=.....
00:09:14.257 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................
00:09:14.257 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00:09:14.257 [2024-07-25 08:51:21.341580] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection
00:09:14.515 [2024-07-25 08:51:21.482440] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:16.501 [2024-07-25 08:51:23.442685] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:16.501 [2024-07-25 08:51:23.458733] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=8, MaxCmdSN=71)
00:09:16.501 [2024-07-25 08:51:23.458873] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9
00:09:16.501 [2024-07-25 08:51:23.503957] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting
00:09:16.501 [2024-07-25 08:51:23.542195] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:09:16.501 [2024-07-25 08:51:23.542354] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:16.501 [2024-07-25 08:51:23.575949] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:09:16.501 [2024-07-25 08:51:23.576090] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:16.501 [2024-07-25 08:51:23.590237] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:16.501 [2024-07-25 08:51:23.590417] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:16.758 [2024-07-25 08:51:23.643714] param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 276
00:09:16.758 [2024-07-25 08:51:23.643776] iscsi.c:1303:iscsi_op_login_store_incoming_params: *ERROR*: iscsi_parse_params() failed
00:09:16.758 [2024-07-25 08:51:23.659323] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:09:16.758 [2024-07-25 08:51:23.677340] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(3) error ExpCmdSN=4
00:09:16.758 [2024-07-25 08:51:23.677525] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:09:17.016 [2024-07-25 08:51:24.084764] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:09:17.016 [2024-07-25 08:51:24.107327] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6
00:09:17.274 [2024-07-25 08:51:24.149977] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:17.274 [2024-07-25 08:51:24.150131] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:17.274 [2024-07-25 08:51:24.180028] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:09:17.274 [2024-07-25 08:51:24.180184] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:17.274 [2024-07-25 08:51:24.199335] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:17.274 [2024-07-25 08:51:24.199478] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:17.274 [2024-07-25 08:51:24.237609] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:09:17.274 [2024-07-25 08:51:24.237760] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:09:17.274 [2024-07-25 08:51:24.258096] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:09:17.274 [2024-07-25 08:51:24.279585] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2
00:09:17.274 [2024-07-25 08:51:24.317095] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:09:17.274 [2024-07-25 08:51:24.356780]
iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:17.274 [2024-07-25 08:51:24.356939] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:17.274 [2024-07-25 08:51:24.376510] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:17.274 [2024-07-25 08:51:24.376660] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:09:17.274 [2024-07-25 08:51:24.377038] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:09:17.532 [2024-07-25 08:51:24.410089] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:17.532 [2024-07-25 08:51:24.466584] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:17.532 [2024-07-25 08:51:24.467375] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:17.532 [2024-07-25 08:51:24.489346] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:17.532 [2024-07-25 08:51:24.504694] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:17.532 [2024-07-25 08:51:24.566825] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:09:17.532 [2024-07-25 08:51:24.601133] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:17.790 [2024-07-25 08:51:24.678333] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:17.790 [2024-07-25 08:51:24.678504] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:17.791 [2024-07-25 08:51:24.701420] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:09:17.791 [2024-07-25 08:51:24.733971] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=ffffffff 00:09:17.791 [2024-07-25 08:51:24.754463] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:17.791 [2024-07-25 08:51:24.774866] param.c: 
859:iscsi_negotiate_param_init: *ERROR*: unknown key ImmediateDataa 00:09:17.791 [2024-07-25 08:51:24.787534] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:09:17.791 [2024-07-25 08:51:24.808963] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:17.791 [2024-07-25 08:51:24.809328] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:17.791 [2024-07-25 08:51:24.830530] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:17.791 [2024-07-25 08:51:24.830683] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:17.791 [2024-07-25 08:51:24.853541] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:17.791 [2024-07-25 08:51:24.872581] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:09:17.791 [2024-07-25 08:51:24.908677] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:17.791 [2024-07-25 08:51:24.908840] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:18.049 [2024-07-25 08:51:24.930560] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:18.049 [2024-07-25 08:51:24.930742] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:18.049 [2024-07-25 08:51:25.024712] iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 2745410467, and the dataout task tag is 2728567458 00:09:18.049 [2024-07-25 08:51:25.024899] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:09:18.049 [2024-07-25 08:51:25.025101] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:09:18.049 [2024-07-25 08:51:25.025168] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:09:18.049 [2024-07-25 08:51:25.058519] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 
00:09:18.049 [2024-07-25 08:51:25.078959] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:18.049 [2024-07-25 08:51:25.079106] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:18.049 [2024-07-25 08:51:25.100938] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:18.049 [2024-07-25 08:51:25.155138] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:18.308 [2024-07-25 08:51:25.176682] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:09:18.308 [2024-07-25 08:51:25.211356] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:18.308 [2024-07-25 08:51:25.232636] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:19.242 [2024-07-25 08:51:26.266684] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:20.177 [2024-07-25 08:51:27.254589] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=6, MaxCmdSN=68) 00:09:20.177 [2024-07-25 08:51:27.255100] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=7 00:09:20.178 [2024-07-25 08:51:27.266861] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=5, MaxCmdSN=68) 00:09:21.551 [2024-07-25 08:51:28.267062] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=6, MaxCmdSN=69) 00:09:21.551 [2024-07-25 08:51:28.267258] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=7, MaxCmdSN=70) 00:09:21.551 [2024-07-25 08:51:28.267298] iscsi.c:4028:iscsi_handle_status_snack: *ERROR*: Unable to find StatSN: 0x00000007. For a StatusSNACK, assuming this is a proactive SNACK for an untransmitted StatSN, ignoring. 
00:09:21.551 [2024-07-25 08:51:28.267320] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=8 00:09:33.802 [2024-07-25 08:51:40.321470] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:09:33.802 [2024-07-25 08:51:40.342767] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:09:33.802 [2024-07-25 08:51:40.362813] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:33.802 [2024-07-25 08:51:40.363069] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:09:33.802 [2024-07-25 08:51:40.383848] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:33.802 [2024-07-25 08:51:40.403883] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:33.802 [2024-07-25 08:51:40.425849] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:09:33.802 [2024-07-25 08:51:40.466742] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:33.802 [2024-07-25 08:51:40.471488] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=64 00:09:33.802 [2024-07-25 08:51:40.481037] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1107296256) error ExpCmdSN=66 00:09:33.802 [2024-07-25 08:51:40.512732] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:33.802 [2024-07-25 08:51:40.521666] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=67 00:09:33.802 Skipping tc_ffp_15_2. It is known to fail. 00:09:33.802 Skipping tc_ffp_29_2. It is known to fail. 00:09:33.802 Skipping tc_ffp_29_3. It is known to fail. 00:09:33.802 Skipping tc_ffp_29_4. It is known to fail. 00:09:33.802 Skipping tc_err_1_1. It is known to fail. 00:09:33.802 Skipping tc_err_1_2. It is known to fail. 00:09:33.802 Skipping tc_err_2_8. It is known to fail. 00:09:33.802 Skipping tc_err_3_1. It is known to fail. 00:09:33.802 Skipping tc_err_3_2. It is known to fail. 
00:09:33.802 Skipping tc_err_3_3. It is known to fail. 00:09:33.802 Skipping tc_err_3_4. It is known to fail. 00:09:33.802 Skipping tc_err_5_1. It is known to fail. 00:09:33.802 Skipping tc_login_3_1. It is known to fail. 00:09:33.802 Skipping tc_login_11_2. It is known to fail. 00:09:33.802 Skipping tc_login_11_4. It is known to fail. 00:09:33.802 Skipping tc_login_2_2. It is known to fail. 00:09:33.802 Skipping tc_login_29_1. It is known to fail. 00:09:33.802 Cleaning up iSCSI connection 00:09:33.802 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@62 -- # failed=0 00:09:33.802 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:33.802 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@67 -- # iscsicleanup 00:09:33.802 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:09:33.802 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:09:33.802 iscsiadm: No matching sessions found 00:09:33.802 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@983 -- # true 00:09:33.803 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:09:33.803 iscsiadm: No records found 00:09:33.803 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@984 -- # true 00:09:33.803 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@985 -- # rm -rf 00:09:33.803 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@68 -- # killprocess 63306 00:09:33.803 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@950 -- # '[' -z 63306 ']' 00:09:33.803 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # kill -0 63306 00:09:33.803 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@955 -- # uname 00:09:33.803 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:09:33.803 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63306 00:09:33.803 killing process with pid 63306 00:09:33.803 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:33.803 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:33.803 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63306' 00:09:33.803 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@969 -- # kill 63306 00:09:33.803 08:51:40 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@974 -- # wait 63306 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@69 -- # delete_tmp_conf_files 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@12 -- # rm -f /usr/local/etc/its.conf 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@70 -- # iscsitestfini 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@71 -- # exit 0 00:09:37.123 ************************************ 00:09:37.123 END TEST iscsi_tgt_calsoft 00:09:37.123 ************************************ 00:09:37.123 00:09:37.123 real 0m28.983s 00:09:37.123 user 0m45.082s 00:09:37.123 sys 0m2.352s 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:09:37.123 08:51:43 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@31 -- # run_test iscsi_tgt_filesystem /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:09:37.123 08:51:43 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:37.123 08:51:43 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:09:37.123 08:51:43 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:09:37.123 ************************************ 00:09:37.123 START TEST iscsi_tgt_filesystem 00:09:37.123 ************************************ 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:09:37.123 * Looking for test storage... 00:09:37.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/setup/common.sh 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 
00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 
00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:37.123 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- 
common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:37.124 08:51:43 
iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:37.124 08:51:43 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:37.124 #define SPDK_CONFIG_H 00:09:37.124 #define SPDK_CONFIG_APPS 1 00:09:37.124 #define SPDK_CONFIG_ARCH native 00:09:37.124 #define SPDK_CONFIG_ASAN 1 00:09:37.124 #undef SPDK_CONFIG_AVAHI 00:09:37.124 #undef SPDK_CONFIG_CET 00:09:37.124 #define SPDK_CONFIG_COVERAGE 1 00:09:37.124 #define SPDK_CONFIG_CROSS_PREFIX 00:09:37.124 #undef SPDK_CONFIG_CRYPTO 00:09:37.124 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:37.124 #undef SPDK_CONFIG_CUSTOMOCF 00:09:37.124 #undef SPDK_CONFIG_DAOS 00:09:37.124 #define SPDK_CONFIG_DAOS_DIR 00:09:37.124 #define SPDK_CONFIG_DEBUG 1 00:09:37.124 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:37.124 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:37.124 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:37.124 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:37.124 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:37.124 #undef SPDK_CONFIG_DPDK_UADK 00:09:37.124 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:37.124 #define SPDK_CONFIG_EXAMPLES 1 00:09:37.124 #undef SPDK_CONFIG_FC 00:09:37.124 #define SPDK_CONFIG_FC_PATH 00:09:37.124 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:37.124 #define 
SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:37.124 #undef SPDK_CONFIG_FUSE 00:09:37.124 #undef SPDK_CONFIG_FUZZER 00:09:37.124 #define SPDK_CONFIG_FUZZER_LIB 00:09:37.124 #undef SPDK_CONFIG_GOLANG 00:09:37.124 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:37.124 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:37.124 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:37.124 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:37.124 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:37.124 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:37.124 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:37.124 #define SPDK_CONFIG_IDXD 1 00:09:37.124 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:37.124 #undef SPDK_CONFIG_IPSEC_MB 00:09:37.124 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:37.124 #define SPDK_CONFIG_ISAL 1 00:09:37.124 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:37.124 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:37.124 #define SPDK_CONFIG_LIBDIR 00:09:37.124 #undef SPDK_CONFIG_LTO 00:09:37.124 #define SPDK_CONFIG_MAX_LCORES 128 00:09:37.124 #define SPDK_CONFIG_NVME_CUSE 1 00:09:37.124 #undef SPDK_CONFIG_OCF 00:09:37.124 #define SPDK_CONFIG_OCF_PATH 00:09:37.124 #define SPDK_CONFIG_OPENSSL_PATH 00:09:37.124 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:37.124 #define SPDK_CONFIG_PGO_DIR 00:09:37.124 #undef SPDK_CONFIG_PGO_USE 00:09:37.124 #define SPDK_CONFIG_PREFIX /usr/local 00:09:37.124 #undef SPDK_CONFIG_RAID5F 00:09:37.124 #define SPDK_CONFIG_RBD 1 00:09:37.124 #define SPDK_CONFIG_RDMA 1 00:09:37.124 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:37.124 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:37.124 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:37.124 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:37.124 #define SPDK_CONFIG_SHARED 1 00:09:37.124 #undef SPDK_CONFIG_SMA 00:09:37.124 #define SPDK_CONFIG_TESTS 1 00:09:37.124 #undef SPDK_CONFIG_TSAN 00:09:37.124 #define SPDK_CONFIG_UBLK 1 00:09:37.124 #define SPDK_CONFIG_UBSAN 1 00:09:37.124 #undef SPDK_CONFIG_UNIT_TESTS 00:09:37.124 #undef SPDK_CONFIG_URING 00:09:37.124 #define 
SPDK_CONFIG_URING_PATH 00:09:37.124 #undef SPDK_CONFIG_URING_ZNS 00:09:37.124 #undef SPDK_CONFIG_USDT 00:09:37.124 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:37.124 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:37.124 #undef SPDK_CONFIG_VFIO_USER 00:09:37.124 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:37.124 #define SPDK_CONFIG_VHOST 1 00:09:37.124 #define SPDK_CONFIG_VIRTIO 1 00:09:37.124 #undef SPDK_CONFIG_VTUNE 00:09:37.124 #define SPDK_CONFIG_VTUNE_DIR 00:09:37.124 #define SPDK_CONFIG_WERROR 1 00:09:37.124 #define SPDK_CONFIG_WPDK_DIR 00:09:37.124 #undef SPDK_CONFIG_XNVME 00:09:37.124 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- 
pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # uname -s 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:37.124 08:51:44 
iscsi_tgt.iscsi_tgt_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@58 -- # : 1 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@70 -- # : 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:37.124 08:51:44 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@76 -- # : 1 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@78 -- # : 1 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@86 -- # : 0 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:37.124 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@92 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:37.125 08:51:44 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@104 -- # : 1 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:37.125 
08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@120 -- # : 1 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@124 -- # : 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 
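The long run of traced `: 0` / `export VAR` pairs above is what bash's xtrace prints for the default-assignment idiom in autotest_common.sh: `:` is a no-op whose argument expansion assigns a default only when the variable is unset. A minimal sketch of that idiom (the two flag names are taken from the log; the script body is a reconstruction, not the actual autotest_common.sh source):

```shell
#!/usr/bin/env bash
# With `set -x`, each line below traces as ": 0" followed by
# "export SPDK_...", exactly as seen in the log: the ${VAR:=default}
# expansion assigns 0 only if the variable is not already set.
set -x
: "${SPDK_RUN_VALGRIND:=0}"
export SPDK_RUN_VALGRIND
: "${SPDK_TEST_ISCSI:=0}"
export SPDK_TEST_ISCSI
set +x
echo "$SPDK_RUN_VALGRIND $SPDK_TEST_ISCSI"
```

This lets a caller override any flag from the environment (e.g. `SPDK_TEST_ISCSI=1 ./autotest.sh`) while the script still guarantees every flag has a value.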
00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@138 -- # : 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:37.125 
08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@154 -- # : 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@169 -- # : 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@170 -- # export 
SPDK_TEST_FUZZER_TARGET 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@187 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@202 -- # cat 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:37.125 08:51:44 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@258 -- # export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j10 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@300 -- # NO_HUGE=() 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@320 -- # [[ -z 64053 ]] 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@320 -- # kill -0 64053 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.0FYSRN 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 
/tmp/spdk.0FYSRN/tests/filesystem /tmp/spdk.0FYSRN 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@329 -- # df -T 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=devtmpfs 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=4194304 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=4194304 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6263177216 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6267887616 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4710400 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=2496167936 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=2507157504 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10989568 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda5 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=btrfs 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=13767634944 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20314062848 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5261365248 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda5 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=btrfs 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=13767634944 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20314062848 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5261365248 00:09:37.125 08:51:44 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda2 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext4 00:09:37.125 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=843546624 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=1012768768 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=100016128 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda3 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=vfat 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=92499968 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=104607744 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=12107776 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6267744256 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6267887616 
00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=143360 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=1253572608 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=1253576704 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt/output 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=fuse.sshfs 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=92982259712 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=105088212992 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6720520192 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:09:37.126 * Looking for test storage... 
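The trace above shows `autotest_common.sh` walking `df` output into parallel bash arrays (`mounts`, `fss`, `avails`, `sizes`, `uses`) keyed by mount point before it starts looking for test storage. A minimal sketch of that parsing pattern, assuming `df -Tk` column order; this is a simplification, not the exact helper from the repo:

```shell
#!/usr/bin/env bash
# Sketch of the df-parsing loop seen in the trace: for each mounted
# filesystem, record its source device, fs type, and byte counts in
# associative arrays keyed by mount point.
declare -A mounts fss avails sizes uses

# df -Tk columns: Filesystem Type 1K-blocks Used Available Use% Mounted-on
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))    # convert 1K blocks to bytes
    uses["$mount"]=$((use * 1024))
    avails["$mount"]=$((avail * 1024))
done < <(df -Tk | tail -n +2)           # tail skips the header line

# Example: report what backs the root mount.
echo "/ is ${fss[/]} with ${avails[/]} bytes available"
```

From here a storage-candidate check like the one in the trace reduces to comparing `avails["$mount"]` against the requested size for each candidate directory's mount point.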
00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # df /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # mount=/home 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@376 -- # target_space=13767634944 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@382 -- # [[ btrfs == tmpfs ]] 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@382 -- # [[ btrfs == ramfs ]] 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@382 -- # [[ /home == / ]] 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:37.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:37.126 08:51:44 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@391 -- # return 0 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@11 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@13 -- # iscsitestinit 00:09:37.126 08:51:44 
iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@29 -- # timing_enter start_iscsi_tgt 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@32 -- # pid=64090 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@33 -- # echo 'Process pid: 64090' 00:09:37.126 Process pid: 64090 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@35 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@37 -- # waitforlisten 64090 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@831 -- # '[' -z 64090 ']' 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
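At this point the log has launched `iscsi_tgt` inside the `spdk_iscsi_ns` namespace with `--wait-for-rpc` and is blocking in `waitforlisten` until the app serves `/var/tmp/spdk.sock`. The helper name and the `max_retries=100` value mirror the trace, but the body below is a simplified sketch: the real helper in `autotest_common.sh` does more (it issues RPCs against the socket), whereas this version only checks process liveness and socket existence.

```shell
# Simplified waitforlisten-style helper: poll until the target process is
# alive AND its UNIX-domain RPC socket exists, up to max_retries attempts.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # Give up immediately if the target process died during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        # Success once the listening socket appears.
        [ -S "$rpc_addr" ] && return 0
        sleep 0.1
    done
    return 1
}

# Example: fails fast for a PID that cannot exist.
waitforlisten 99999999 /tmp/nonexistent.sock 3 || echo "target never came up"
```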
00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:37.126 08:51:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.383 [2024-07-25 08:51:44.295664] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:37.383 [2024-07-25 08:51:44.295922] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64090 ] 00:09:37.383 [2024-07-25 08:51:44.452299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:37.640 [2024-07-25 08:51:44.737869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.640 [2024-07-25 08:51:44.738027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.640 [2024-07-25 08:51:44.738125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.640 [2024-07-25 08:51:44.738180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.204 08:51:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.204 08:51:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@864 -- # return 0 00:09:38.204 08:51:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@38 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:09:38.204 08:51:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.204 08:51:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.204 08:51:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.204 08:51:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@39 -- # rpc_cmd framework_start_init 00:09:38.204 08:51:45 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.204 08:51:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.170 iscsi_tgt is listening. Running tests... 00:09:39.170 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.170 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@40 -- # echo 'iscsi_tgt is listening. Running tests...' 00:09:39.171 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@42 -- # timing_exit start_iscsi_tgt 00:09:39.171 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:39.171 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # get_first_nvme_bdf 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # bdfs=() 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # local bdfs 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # local bdfs 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # bdf=0000:00:10.0 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@45 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@46 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@47 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:00:10.0 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.428 Nvme0n1 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # rpc_cmd bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # ls_guid=9791eb56-ae1f-4512-a359-825384a8ca90 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # get_lvs_free_mb 9791eb56-ae1f-4512-a359-825384a8ca90 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1364 -- # local lvs_uuid=9791eb56-ae1f-4512-a359-825384a8ca90 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1365 -- # local lvs_info 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1366 -- # local fc 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1367 -- # local cs 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_lvol_get_lvstores 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:09:39.428 { 00:09:39.428 "uuid": "9791eb56-ae1f-4512-a359-825384a8ca90", 00:09:39.428 "name": "lvs_0", 00:09:39.428 "base_bdev": "Nvme0n1", 00:09:39.428 "total_data_clusters": 1278, 00:09:39.428 "free_clusters": 1278, 00:09:39.428 "block_size": 4096, 00:09:39.428 "cluster_size": 4194304 00:09:39.428 } 00:09:39.428 ]' 00:09:39.428 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="9791eb56-ae1f-4512-a359-825384a8ca90") 
.free_clusters' 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # fc=1278 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="9791eb56-ae1f-4512-a359-825384a8ca90") .cluster_size' 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # cs=4194304 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1373 -- # free_mb=5112 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1374 -- # echo 5112 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # free_mb=5112 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@52 -- # '[' 5112 -gt 2048 ']' 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@53 -- # rpc_cmd bdev_lvol_create -u 9791eb56-ae1f-4512-a359-825384a8ca90 lbd_0 2048 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.686 79a2984a-8335-4da7-ab67-89558e2d216e 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@61 -- # lvol_name=lvs_0/lbd_0 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@62 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias lvs_0/lbd_0:0 1:2 256 -d 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.686 08:51:46 iscsi_tgt.iscsi_tgt_filesystem -- 
filesystem/filesystem.sh@63 -- # sleep 1 00:09:40.622 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@65 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:09:40.622 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:09:40.622 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@66 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:09:40.622 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:09:40.622 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:09:40.622 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@67 -- # waitforiscsidevices 1 00:09:40.622 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@116 -- # local num=1 00:09:40.622 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:09:40.622 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:40.622 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:40.622 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:40.622 [2024-07-25 08:51:47.735324] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:40.622 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # n=1 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@123 -- # return 0 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # get_bdev_size lvs_0/lbd_0 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1378 -- # local bdev_name=lvs_0/lbd_0 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:40.881 08:51:47 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1380 -- # local bs 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1381 -- # local nb 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b lvs_0/lbd_0 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:40.881 { 00:09:40.881 "name": "79a2984a-8335-4da7-ab67-89558e2d216e", 00:09:40.881 "aliases": [ 00:09:40.881 "lvs_0/lbd_0" 00:09:40.881 ], 00:09:40.881 "product_name": "Logical Volume", 00:09:40.881 "block_size": 4096, 00:09:40.881 "num_blocks": 524288, 00:09:40.881 "uuid": "79a2984a-8335-4da7-ab67-89558e2d216e", 00:09:40.881 "assigned_rate_limits": { 00:09:40.881 "rw_ios_per_sec": 0, 00:09:40.881 "rw_mbytes_per_sec": 0, 00:09:40.881 "r_mbytes_per_sec": 0, 00:09:40.881 "w_mbytes_per_sec": 0 00:09:40.881 }, 00:09:40.881 "claimed": false, 00:09:40.881 "zoned": false, 00:09:40.881 "supported_io_types": { 00:09:40.881 "read": true, 00:09:40.881 "write": true, 00:09:40.881 "unmap": true, 00:09:40.881 "flush": false, 00:09:40.881 "reset": true, 00:09:40.881 "nvme_admin": false, 00:09:40.881 "nvme_io": false, 00:09:40.881 "nvme_io_md": false, 00:09:40.881 "write_zeroes": true, 00:09:40.881 "zcopy": false, 00:09:40.881 "get_zone_info": false, 00:09:40.881 "zone_management": false, 00:09:40.881 "zone_append": false, 00:09:40.881 "compare": false, 00:09:40.881 "compare_and_write": false, 00:09:40.881 "abort": false, 00:09:40.881 "seek_hole": true, 00:09:40.881 "seek_data": true, 00:09:40.881 "copy": false, 00:09:40.881 "nvme_iov_md": false 00:09:40.881 }, 
00:09:40.881 "driver_specific": { 00:09:40.881 "lvol": { 00:09:40.881 "lvol_store_uuid": "9791eb56-ae1f-4512-a359-825384a8ca90", 00:09:40.881 "base_bdev": "Nvme0n1", 00:09:40.881 "thin_provision": false, 00:09:40.881 "num_allocated_clusters": 512, 00:09:40.881 "snapshot": false, 00:09:40.881 "clone": false, 00:09:40.881 "esnap_clone": false 00:09:40.881 } 00:09:40.881 } 00:09:40.881 } 00:09:40.881 ]' 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # bs=4096 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # nb=524288 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1387 -- # bdev_size=2048 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1388 -- # echo 2048 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # lvol_size=2147483648 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@70 -- # trap 'iscsicleanup; remove_backends; umount /mnt/device; rm -rf /mnt/device; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@72 -- # mkdir -p /mnt/device 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # iscsiadm -m session -P 3 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # awk '{print $4}' 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # grep 'Attached scsi disk' 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # dev=sda 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- 
filesystem/filesystem.sh@76 -- # waitforfile /dev/sda 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1265 -- # local i=0 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1276 -- # return 0 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # sec_size_to_bytes sda 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@76 -- # local dev=sda 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@78 -- # [[ -e /sys/block/sda ]] 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@80 -- # echo 2147483648 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # dev_size=2147483648 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@80 -- # (( lvol_size == dev_size )) 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@81 -- # parted -s /dev/sda mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:40.881 [2024-07-25 08:51:47.905137] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:40.881 08:51:47 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@82 -- # sleep 1 00:09:41.817 08:51:48 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@144 -- # run_test iscsi_tgt_filesystem_ext4 filesystem_test ext4 00:09:41.817 08:51:48 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:41.817 08:51:48 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.817 08:51:48 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.817 ************************************ 
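Before this subtest started, the trace derived `free_mb=5112` from the lvstore's `free_clusters=1278` and `cluster_size=4194304`, then carved a 2048 MiB `lbd_0` out of it. The arithmetic behind `get_lvs_free_mb` is just clusters times cluster size in bytes, scaled to MiB; a small sketch using the values from the trace (function name is illustrative):

```shell
# Free space of an lvstore in MiB: free_clusters * cluster_size (bytes),
# divided down to mebibytes.
lvs_free_mb() {
    local free_clusters=$1 cluster_size=$2
    echo $(( free_clusters * cluster_size / 1024 / 1024 ))
}

# Values from the trace: 1278 free clusters of 4 MiB (4194304 bytes) each.
lvs_free_mb 1278 4194304   # prints 5112
```

That 5112 MiB is what the `'[' 5112 -gt 2048 ']'` guard in the trace compares against before creating the 2048 MiB logical volume.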
00:09:41.817 START TEST iscsi_tgt_filesystem_ext4 00:09:41.817 ************************************ 00:09:41.817 08:51:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1125 -- # filesystem_test ext4 00:09:41.817 08:51:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@89 -- # fstype=ext4 00:09:41.817 08:51:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@91 -- # make_filesystem ext4 /dev/sda1 00:09:41.817 08:51:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:41.817 08:51:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/sda1 00:09:41.817 08:51:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:41.817 08:51:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:41.817 08:51:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:41.817 08:51:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:41.817 08:51:48 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda1 00:09:41.817 mke2fs 1.46.5 (30-Dec-2021) 00:09:42.075 Discarding device blocks: 0/522240 done 00:09:42.075 Creating filesystem with 522240 4k blocks and 130560 inodes 00:09:42.075 Filesystem UUID: 25422895-51db-4a38-9fcc-2bbd84f00517 00:09:42.075 Superblock backups stored on blocks: 00:09:42.075 32768, 98304, 163840, 229376, 294912 00:09:42.075 00:09:42.075 Allocating group tables: 0/16 done 00:09:42.075 Writing inode tables: 0/16 done 00:09:42.075 Creating journal (8192 blocks): done 00:09:42.340 Writing superblocks and filesystem accounting 
information: 0/16 done 00:09:42.340 00:09:42.340 08:51:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:42.340 08:51:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:09:42.340 08:51:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:09:42.340 08:51:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:09:42.340 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:09:42.340 fio-3.35 00:09:42.340 Starting 1 thread 00:09:42.340 job0: Laying out IO file (1 file / 1024MiB) 00:10:00.415 00:10:00.415 job0: (groupid=0, jobs=1): err= 0: pid=64255: Thu Jul 25 08:52:04 2024 00:10:00.415 write: IOPS=17.2k, BW=67.1MiB/s (70.4MB/s)(1024MiB/15259msec); 0 zone resets 00:10:00.415 slat (usec): min=5, max=34569, avg=24.95, stdev=194.26 00:10:00.415 clat (usec): min=524, max=54579, avg=3698.83, stdev=2495.24 00:10:00.415 lat (usec): min=537, max=54598, avg=3723.79, stdev=2514.11 00:10:00.415 clat percentiles (usec): 00:10:00.415 | 1.00th=[ 1844], 5.00th=[ 2147], 10.00th=[ 2376], 20.00th=[ 2704], 00:10:00.415 | 30.00th=[ 2966], 40.00th=[ 3261], 50.00th=[ 3523], 60.00th=[ 3785], 00:10:00.415 | 70.00th=[ 4047], 80.00th=[ 4293], 90.00th=[ 4686], 95.00th=[ 5080], 00:10:00.415 | 99.00th=[ 6980], 99.50th=[12256], 99.90th=[43254], 99.95th=[44303], 00:10:00.415 | 99.99th=[52691] 00:10:00.415 bw ( KiB/s): min=58008, max=75192, per=99.82%, avg=68593.33, stdev=5071.48, samples=30 00:10:00.415 iops : min=14502, max=18798, avg=17148.40, stdev=1267.91, samples=30 00:10:00.415 lat (usec) : 750=0.01%, 1000=0.01% 00:10:00.415 lat (msec) : 2=2.89%, 4=65.86%, 
10=30.63%, 20=0.23%, 50=0.38% 00:10:00.415 lat (msec) : 100=0.01% 00:10:00.415 cpu : usr=4.63%, sys=29.73%, ctx=27454, majf=0, minf=1 00:10:00.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:00.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:10:00.415 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.415 latency : target=0, window=0, percentile=100.00%, depth=64 00:10:00.415 00:10:00.415 Run status group 0 (all jobs): 00:10:00.415 WRITE: bw=67.1MiB/s (70.4MB/s), 67.1MiB/s-67.1MiB/s (70.4MB/s-70.4MB/s), io=1024MiB (1074MB), run=15259-15259msec 00:10:00.415 00:10:00.415 Disk stats (read/write): 00:10:00.415 sda: ios=0/258875, merge=0/2576, ticks=0/811185, in_queue=811184, util=99.34% 00:10:00.415 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:10:00.415 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:10:00.415 Logging out of session [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:00.415 Logout of [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:10:00.415 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=0 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:00.416 iscsiadm: No active sessions. 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # true 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=0 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:10:00.416 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:00.416 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=1 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:00.416 [2024-07-25 08:52:04.938671] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=1 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # dev=sda 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:10:00.416 08:52:04 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1265 -- # local i=0 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1276 -- # return 0 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:10:00.416 File existed. 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 00:10:00.416 08:52:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:10:00.416 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:10:00.416 fio-3.35 00:10:00.416 Starting 1 thread 00:10:18.502 00:10:18.502 job0: (groupid=0, jobs=1): err= 0: pid=64602: Thu Jul 25 08:52:25 2024 00:10:18.502 read: IOPS=18.2k, BW=71.0MiB/s (74.4MB/s)(1420MiB/20003msec) 00:10:18.502 slat (usec): min=2, max=3286, avg=11.22, stdev=39.66 00:10:18.502 clat (usec): min=821, max=22721, avg=3506.08, stdev=1051.77 00:10:18.502 lat (usec): min=832, max=24199, avg=3517.30, stdev=1056.78 00:10:18.502 clat percentiles (usec): 00:10:18.502 | 1.00th=[ 2040], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2507], 00:10:18.502 | 30.00th=[ 2900], 40.00th=[ 3130], 50.00th=[ 3458], 60.00th=[ 
3654], 00:10:18.502 | 70.00th=[ 4047], 80.00th=[ 4293], 90.00th=[ 4686], 95.00th=[ 4883], 00:10:18.502 | 99.00th=[ 5473], 99.50th=[ 6325], 99.90th=[14091], 99.95th=[18482], 00:10:18.502 | 99.99th=[20579] 00:10:18.502 bw ( KiB/s): min=44096, max=81421, per=100.00%, avg=72829.67, stdev=4984.55, samples=39 00:10:18.502 iops : min=11024, max=20355, avg=18207.44, stdev=1246.13, samples=39 00:10:18.502 lat (usec) : 1000=0.03% 00:10:18.502 lat (msec) : 2=0.86%, 4=66.80%, 10=32.12%, 20=0.18%, 50=0.02% 00:10:18.502 cpu : usr=5.49%, sys=18.45%, ctx=33893, majf=0, minf=65 00:10:18.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:18.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:10:18.502 issued rwts: total=363579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.502 latency : target=0, window=0, percentile=100.00%, depth=64 00:10:18.502 00:10:18.502 Run status group 0 (all jobs): 00:10:18.502 READ: bw=71.0MiB/s (74.4MB/s), 71.0MiB/s-71.0MiB/s (74.4MB/s-74.4MB/s), io=1420MiB (1489MB), run=20003-20003msec 00:10:18.502 00:10:18.502 Disk stats (read/write): 00:10:18.502 sda: ios=360843/5, merge=1417/2, ticks=1176121/6, in_queue=1176128, util=99.64% 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:10:18.502 00:10:18.502 real 0m36.357s 00:10:18.502 user 0m2.072s 00:10:18.502 sys 0m8.513s 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:18.502 ************************************ 00:10:18.502 END TEST 
iscsi_tgt_filesystem_ext4 00:10:18.502 ************************************ 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@145 -- # run_test iscsi_tgt_filesystem_btrfs filesystem_test btrfs 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:18.502 ************************************ 00:10:18.502 START TEST iscsi_tgt_filesystem_btrfs 00:10:18.502 ************************************ 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1125 -- # filesystem_test btrfs 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@89 -- # fstype=btrfs 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@91 -- # make_filesystem btrfs /dev/sda1 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/sda1 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/sda1 00:10:18.502 btrfs-progs v6.6.2 00:10:18.502 See https://btrfs.readthedocs.io for more information. 00:10:18.502 00:10:18.502 Performing full device TRIM /dev/sda1 (1.99GiB) ... 00:10:18.502 NOTE: several default settings have changed in version 5.15, please make sure 00:10:18.502 this does not affect your deployments: 00:10:18.502 - DUP for metadata (-m dup) 00:10:18.502 - enabled no-holes (-O no-holes) 00:10:18.502 - enabled free-space-tree (-R free-space-tree) 00:10:18.502 00:10:18.502 Label: (null) 00:10:18.502 UUID: 04d770e2-5440-4594-9646-bc0fd5572b6c 00:10:18.502 Node size: 16384 00:10:18.502 Sector size: 4096 00:10:18.502 Filesystem size: 1.99GiB 00:10:18.502 Block group profiles: 00:10:18.502 Data: single 8.00MiB 00:10:18.502 Metadata: DUP 102.00MiB 00:10:18.502 System: DUP 8.00MiB 00:10:18.502 SSD detected: yes 00:10:18.502 Zoned device: no 00:10:18.502 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:18.502 Runtime features: free-space-tree 00:10:18.502 Checksum: crc32c 00:10:18.502 Number of devices: 1 00:10:18.502 Devices: 00:10:18.502 ID SIZE PATH 00:10:18.502 1 1.99GiB /dev/sda1 00:10:18.502 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:10:18.502 08:52:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:10:18.761 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 
00:10:18.761 fio-3.35 00:10:18.761 Starting 1 thread 00:10:18.761 job0: Laying out IO file (1 file / 1024MiB) 00:10:36.936 00:10:36.936 job0: (groupid=0, jobs=1): err= 0: pid=64863: Thu Jul 25 08:52:42 2024 00:10:36.936 write: IOPS=15.6k, BW=61.0MiB/s (63.9MB/s)(1024MiB/16800msec); 0 zone resets 00:10:36.936 slat (usec): min=6, max=6457, avg=42.03, stdev=99.99 00:10:36.936 clat (usec): min=464, max=15704, avg=4057.29, stdev=1665.77 00:10:36.936 lat (usec): min=721, max=16303, avg=4099.32, stdev=1683.43 00:10:36.936 clat percentiles (usec): 00:10:36.936 | 1.00th=[ 1713], 5.00th=[ 2114], 10.00th=[ 2376], 20.00th=[ 2769], 00:10:36.936 | 30.00th=[ 3130], 40.00th=[ 3458], 50.00th=[ 3752], 60.00th=[ 4047], 00:10:36.936 | 70.00th=[ 4424], 80.00th=[ 4883], 90.00th=[ 6194], 95.00th=[ 7570], 00:10:36.936 | 99.00th=[10028], 99.50th=[10814], 99.90th=[12780], 99.95th=[13304], 00:10:36.936 | 99.99th=[14484] 00:10:36.936 bw ( KiB/s): min=51552, max=70032, per=99.70%, avg=62225.58, stdev=4618.89, samples=33 00:10:36.936 iops : min=12888, max=17508, avg=15556.33, stdev=1154.79, samples=33 00:10:36.936 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:36.936 lat (msec) : 2=3.51%, 4=54.49%, 10=41.01%, 20=0.99% 00:10:36.936 cpu : usr=3.88%, sys=34.17%, ctx=53485, majf=0, minf=1 00:10:36.936 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:36.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:10:36.936 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.936 latency : target=0, window=0, percentile=100.00%, depth=64 00:10:36.936 00:10:36.936 Run status group 0 (all jobs): 00:10:36.936 WRITE: bw=61.0MiB/s (63.9MB/s), 61.0MiB/s-61.0MiB/s (63.9MB/s-63.9MB/s), io=1024MiB (1074MB), run=16800-16800msec 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@96 -- # umount 
/mnt/device 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:10:36.936 Logging out of session [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:36.936 Logout of [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:36.936 iscsiadm: No active sessions. 
00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # true 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=0 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:10:36.936 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:36.936 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:10:36.936 [2024-07-25 08:52:42.860405] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=1 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- 
iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:10:36.936 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:10:36.937 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # dev=sda 00:10:36.937 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:10:36.937 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1265 -- # local i=0 00:10:36.937 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:10:36.937 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:10:36.937 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1276 -- # return 0 00:10:36.937 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:10:36.937 File existed. 00:10:36.937 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:10:36.937 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 
00:10:36.937 08:52:42 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:10:36.937 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:10:36.937 fio-3.35 00:10:36.937 Starting 1 thread 00:10:58.863 00:10:58.863 job0: (groupid=0, jobs=1): err= 0: pid=65120: Thu Jul 25 08:53:03 2024 00:10:58.863 read: IOPS=17.5k, BW=68.2MiB/s (71.5MB/s)(1364MiB/20003msec) 00:10:58.863 slat (usec): min=4, max=3153, avg=14.85, stdev=19.38 00:10:58.863 clat (usec): min=1139, max=29688, avg=3646.87, stdev=1033.53 00:10:58.863 lat (usec): min=1206, max=30904, avg=3661.72, stdev=1038.04 00:10:58.863 clat percentiles (usec): 00:10:58.863 | 1.00th=[ 2147], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2671], 00:10:58.863 | 30.00th=[ 2999], 40.00th=[ 3294], 50.00th=[ 3589], 60.00th=[ 3884], 00:10:58.863 | 70.00th=[ 4178], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5145], 00:10:58.863 | 99.00th=[ 5735], 99.50th=[ 5997], 99.90th=[ 9765], 99.95th=[17171], 00:10:58.863 | 99.99th=[24773] 00:10:58.863 bw ( KiB/s): min=55728, max=75040, per=100.00%, avg=69834.87, stdev=3404.88, samples=39 00:10:58.863 iops : min=13932, max=18760, avg=17458.72, stdev=851.24, samples=39 00:10:58.863 lat (msec) : 2=0.44%, 4=61.85%, 10=37.62%, 20=0.07%, 50=0.03% 00:10:58.863 cpu : usr=5.81%, sys=25.31%, ctx=51787, majf=0, minf=65 00:10:58.863 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:58.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:10:58.863 issued rwts: total=349181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.863 latency : target=0, window=0, percentile=100.00%, depth=64 00:10:58.863 00:10:58.863 Run status group 
0 (all jobs): 00:10:58.863 READ: bw=68.2MiB/s (71.5MB/s), 68.2MiB/s-68.2MiB/s (71.5MB/s-71.5MB/s), io=1364MiB (1430MB), run=20003-20003msec 00:10:58.863 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:10:58.863 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:10:58.863 00:10:58.863 real 0m37.903s 00:10:58.863 user 0m2.088s 00:10:58.863 sys 0m11.188s 00:10:58.863 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.863 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:58.863 ************************************ 00:10:58.863 END TEST iscsi_tgt_filesystem_btrfs 00:10:58.863 ************************************ 00:10:58.863 08:53:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@146 -- # run_test iscsi_tgt_filesystem_xfs filesystem_test xfs 00:10:58.863 08:53:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:58.863 08:53:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.863 08:53:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.863 ************************************ 00:10:58.863 START TEST iscsi_tgt_filesystem_xfs 00:10:58.863 ************************************ 00:10:58.863 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1125 -- # filesystem_test xfs 00:10:58.863 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@89 -- # fstype=xfs 00:10:58.863 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@91 -- # make_filesystem xfs /dev/sda1 00:10:58.863 08:53:03 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:58.863 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/sda1 00:10:58.863 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:58.864 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:58.864 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:58.864 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:58.864 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/sda1 00:10:58.864 meta-data=/dev/sda1 isize=512 agcount=4, agsize=130560 blks 00:10:58.864 = sectsz=4096 attr=2, projid32bit=1 00:10:58.864 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:58.864 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:58.864 data = bsize=4096 blocks=522240, imaxpct=25 00:10:58.864 = sunit=0 swidth=0 blks 00:10:58.864 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:58.864 log =internal log bsize=4096 blocks=16384, version=2 00:10:58.864 = sectsz=4096 sunit=1 blks, lazy-count=1 00:10:58.864 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:58.864 Discarding blocks...Done. 
00:10:58.864 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:58.864 08:53:03 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:10:58.864 08:53:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:10:58.864 08:53:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:10:58.864 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:10:58.864 fio-3.35 00:10:58.864 Starting 1 thread 00:10:58.864 job0: Laying out IO file (1 file / 1024MiB) 00:11:13.753 00:11:13.753 job0: (groupid=0, jobs=1): err= 0: pid=65379: Thu Jul 25 08:53:19 2024 00:11:13.753 write: IOPS=17.4k, BW=68.0MiB/s (71.3MB/s)(1024MiB/15053msec); 0 zone resets 00:11:13.753 slat (usec): min=2, max=3222, avg=21.02, stdev=104.60 00:11:13.753 clat (usec): min=867, max=14858, avg=3652.27, stdev=932.05 00:11:13.753 lat (usec): min=877, max=14865, avg=3673.29, stdev=935.22 00:11:13.753 clat percentiles (usec): 00:11:13.753 | 1.00th=[ 1909], 5.00th=[ 2180], 10.00th=[ 2474], 20.00th=[ 2769], 00:11:13.753 | 30.00th=[ 3130], 40.00th=[ 3425], 50.00th=[ 3654], 60.00th=[ 3916], 00:11:13.753 | 70.00th=[ 4113], 80.00th=[ 4424], 90.00th=[ 4817], 95.00th=[ 5211], 00:11:13.753 | 99.00th=[ 5800], 99.50th=[ 6128], 99.90th=[ 8160], 99.95th=[ 9372], 00:11:13.753 | 99.99th=[11863] 00:11:13.753 bw ( KiB/s): min=65296, max=75608, per=100.00%, avg=69708.53, stdev=1737.08, samples=30 00:11:13.753 iops : min=16324, max=18902, avg=17427.07, stdev=434.27, samples=30 00:11:13.753 lat (usec) : 1000=0.01% 00:11:13.753 lat (msec) : 2=2.15%, 4=62.43%, 10=35.38%, 20=0.03% 00:11:13.753 cpu : usr=4.36%, 
sys=16.78%, ctx=25070, majf=0, minf=1 00:11:13.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:13.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:13.753 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.753 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:13.753 00:11:13.753 Run status group 0 (all jobs): 00:11:13.753 WRITE: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=1024MiB (1074MB), run=15053-15053msec 00:11:13.753 00:11:13.753 Disk stats (read/write): 00:11:13.753 sda: ios=0/258202, merge=0/643, ticks=0/811175, in_queue=811175, util=99.47% 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:11:13.753 Logging out of session [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:13.753 Logout of [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:13.753 iscsiadm: No active sessions. 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # true 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=0 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:11:13.753 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:13.753 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:13.753 [2024-07-25 08:53:19.736989] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:13.753 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=1 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # dev=sda 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:11:13.754 08:53:19 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1265 -- # local i=0 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1276 -- # return 0 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:11:13.754 File existed. 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 00:11:13.754 08:53:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:11:13.754 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:11:13.754 fio-3.35 00:11:13.754 Starting 1 thread 00:11:35.761 00:11:35.761 job0: (groupid=0, jobs=1): err= 0: pid=65590: Thu Jul 25 08:53:40 2024 00:11:35.761 read: IOPS=18.5k, BW=72.1MiB/s (75.6MB/s)(1442MiB/20003msec) 00:11:35.761 slat (usec): min=2, max=864, avg=10.22, stdev=10.78 00:11:35.761 clat (usec): min=848, max=8199, avg=3455.22, stdev=865.14 00:11:35.761 lat (usec): min=1092, max=8207, avg=3465.44, stdev=864.75 00:11:35.761 clat percentiles (usec): 00:11:35.761 | 1.00th=[ 2057], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2540], 00:11:35.761 | 30.00th=[ 2868], 40.00th=[ 3130], 50.00th=[ 3425], 60.00th=[ 3621], 
00:11:35.761 | 70.00th=[ 4015], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 4817], 00:11:35.761 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 5997], 99.95th=[ 6259], 00:11:35.761 | 99.99th=[ 7046] 00:11:35.761 bw ( KiB/s): min=69632, max=79632, per=100.00%, avg=73935.95, stdev=1441.07, samples=39 00:11:35.761 iops : min=17408, max=19908, avg=18483.97, stdev=360.28, samples=39 00:11:35.761 lat (usec) : 1000=0.01% 00:11:35.761 lat (msec) : 2=0.70%, 4=69.15%, 10=30.15% 00:11:35.761 cpu : usr=5.68%, sys=18.76%, ctx=33686, majf=0, minf=65 00:11:35.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:35.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:35.761 issued rwts: total=369249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:35.761 00:11:35.761 Run status group 0 (all jobs): 00:11:35.761 READ: bw=72.1MiB/s (75.6MB/s), 72.1MiB/s-72.1MiB/s (75.6MB/s-75.6MB/s), io=1442MiB (1512MB), run=20003-20003msec 00:11:35.761 00:11:35.761 Disk stats (read/write): 00:11:35.761 sda: ios=365816/0, merge=1417/0, ticks=1199734/0, in_queue=1199735, util=99.62% 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:11:35.761 00:11:35.761 real 0m36.834s 00:11:35.761 user 0m2.063s 00:11:35.761 sys 0m6.547s 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:35.761 ************************************ 00:11:35.761 END TEST iscsi_tgt_filesystem_xfs 
00:11:35.761 ************************************ 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@148 -- # rm -rf /mnt/device 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@152 -- # iscsicleanup 00:11:35.761 Cleaning up iSCSI connection 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:11:35.761 Logging out of session [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:35.761 Logout of [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@985 -- # rm -rf 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@153 -- # remove_backends 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@17 -- # echo 'INFO: Removing lvol bdev' 00:11:35.761 INFO: Removing lvol bdev 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@18 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.761 [2024-07-25 08:53:40.267711] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (79a2984a-8335-4da7-ab67-89558e2d216e) received event(SPDK_BDEV_EVENT_REMOVE) 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.761 08:53:40 
iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@20 -- # echo 'INFO: Removing lvol stores' 00:11:35.761 INFO: Removing lvol stores 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@21 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.761 INFO: Removing NVMe 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@23 -- # echo 'INFO: Removing NVMe' 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@24 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@26 -- # return 0 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@154 -- # killprocess 64090 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@950 -- # '[' -z 64090 ']' 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # kill -0 64090 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@955 -- # uname 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:35.761 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64090 00:11:35.762 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:35.762 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:35.762 killing process with pid 64090 00:11:35.762 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64090' 00:11:35.762 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@969 -- # kill 64090 00:11:35.762 08:53:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@974 -- # wait 64090 00:11:35.762 08:53:42 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@155 -- # iscsitestfini 00:11:35.762 08:53:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:35.762 00:11:35.762 real 1m58.941s 00:11:35.762 user 7m36.811s 00:11:35.762 sys 0m38.244s 00:11:35.762 08:53:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:35.762 08:53:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.762 ************************************ 00:11:35.762 END TEST iscsi_tgt_filesystem 00:11:35.762 ************************************ 00:11:35.762 08:53:42 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@32 -- # run_test chap_during_discovery /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:11:35.762 08:53:42 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:35.762 08:53:42 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.762 08:53:42 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:35.762 ************************************ 00:11:35.762 START TEST chap_during_discovery 00:11:35.762 ************************************ 00:11:35.762 08:53:42 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:11:36.022 * Looking for test storage... 
00:11:36.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 
00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:36.022 08:53:42 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@13 -- # USER=chapo 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@14 -- # MUSER=mchapo 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@15 -- # PASS=123456789123 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@16 -- # MPASS=321978654321 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@19 -- # iscsitestinit 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@21 -- # set_up_iscsi_target 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 
00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@142 -- # pid=65903 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 65903' 00:11:36.022 iSCSI target launched. pid: 65903 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@145 -- # waitforlisten 65903 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@831 -- # '[' -z 65903 ']' 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.022 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.022 [2024-07-25 08:53:43.132251] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:36.022 [2024-07-25 08:53:43.132421] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65903 ] 00:11:36.589 [2024-07-25 08:53:43.409541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.589 [2024-07-25 08:53:43.649386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.156 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:37.156 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:37.156 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:11:37.157 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.157 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.157 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.157 08:53:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:11:37.157 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.157 08:53:43 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.740 08:53:44 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.740 iscsi_tgt is listening. Running tests... 00:11:37.740 08:53:44 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:11:37.740 08:53:44 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:11:37.740 08:53:44 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:37.740 08:53:44 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.015 08:53:44 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:11:38.015 08:53:44 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.015 08:53:44 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.015 08:53:44 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.015 08:53:44 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:11:38.015 08:53:44 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.015 08:53:44 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.015 08:53:44 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.015 08:53:44 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:11:38.015 08:53:44 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.015 08:53:44 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.015 Malloc0 00:11:38.015 08:53:44 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.015 08:53:44 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:11:38.015 08:53:44 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.015 08:53:44 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.015 08:53:45 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.015 08:53:45 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@155 -- # sleep 1 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:38.953 configuring target for bideerctional authentication 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@24 -- # echo 'configuring target for bideerctional authentication' 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 
00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:38.953 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@95 -- # '[' 0 -eq 1 ']' 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 1 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.954 08:53:46 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.954 executing discovery without adding credential to initiator - we expect failure 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@27 -- # rc=0 00:11:38.954 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:11:39.214 iscsiadm: Login failed to authenticate with target 00:11:39.214 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:11:39.214 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # rc=24 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@29 -- # '[' 24 -eq 0 ']' 00:11:39.214 configuring initiator for bideerctional authentication 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@35 -- # echo 'configuring initiator for bideerctional authentication' 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@36 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 
-- # BI_DIRECT=0 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:11:39.214 iscsiadm: No matching sessions found 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:11:39.214 iscsiadm: No records found 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # true 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password 
= password/' /etc/iscsi/iscsid.conf 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:11:39.214 08:53:46 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:11:42.518 08:53:49 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:11:42.518 08:53:49 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:11:43.455 08:53:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - 
ERR; print_backtrace >&2' ERR 00:11:43.455 08:53:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@116 -- # '[' 0 -eq 1 ']' 00:11:43.455 08:53:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:11:43.455 08:53:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:43.455 08:53:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:11:43.455 08:53:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:11:43.455 08:53:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:11:43.455 08:53:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:11:43.455 08:53:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:11:43.455 08:53:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:11:43.455 08:53:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:11:43.455 08:53:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@135 -- # restart_iscsid 00:11:43.455 08:53:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:11:46.744 08:53:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:11:46.744 08:53:53 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- 
# sleep 1 00:11:47.313 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:11:47.313 executing discovery with adding credential to initiator 00:11:47.313 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@37 -- # echo 'executing discovery with adding credential to initiator' 00:11:47.313 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@38 -- # rc=0 00:11:47.313 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@39 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:11:47.313 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:11:47.313 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@40 -- # '[' 0 -ne 0 ']' 00:11:47.313 DONE 00:11:47.313 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@44 -- # echo DONE 00:11:47.313 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@45 -- # default_initiator_chap_credentials 00:11:47.313 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:11:47.313 iscsiadm: No matching sessions found 00:11:47.313 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:11:47.313 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:11:47.313 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:47.313 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:47.313 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:47.572 08:53:54 
iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:47.572 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:47.572 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:47.572 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:47.572 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:47.572 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:47.572 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:47.572 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:11:47.572 08:53:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:11:50.853 08:53:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:11:50.853 08:53:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:11:51.787 08:53:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:51.787 08:53:58 
iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:11:51.787 08:53:58 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@49 -- # killprocess 65903 00:11:51.787 08:53:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@950 -- # '[' -z 65903 ']' 00:11:51.787 08:53:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # kill -0 65903 00:11:51.787 08:53:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@955 -- # uname 00:11:51.787 08:53:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:51.787 08:53:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65903 00:11:51.787 08:53:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:51.787 08:53:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:51.787 killing process with pid 65903 00:11:51.787 08:53:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65903' 00:11:51.787 08:53:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@969 -- # kill 65903 00:11:51.787 08:53:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@974 -- # wait 65903 00:11:55.070 08:54:01 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@51 -- # iscsitestfini 00:11:55.070 08:54:01 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:55.070 00:11:55.070 real 0m18.663s 00:11:55.071 user 0m18.428s 00:11:55.071 sys 0m0.947s 00:11:55.071 08:54:01 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:55.071 08:54:01 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.071 ************************************ 00:11:55.071 END TEST chap_during_discovery 00:11:55.071 
************************************ 00:11:55.071 08:54:01 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@33 -- # run_test chap_mutual_auth /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:11:55.071 08:54:01 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:55.071 08:54:01 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.071 08:54:01 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:55.071 ************************************ 00:11:55.071 START TEST chap_mutual_auth 00:11:55.071 ************************************ 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:11:55.071 * Looking for test storage... 00:11:55.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@17 -- # 
TARGET_BRIDGE2=tgt_br2 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@13 -- # USER=chapo 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@14 -- # MUSER=mchapo 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@15 -- # PASS=123456789123 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- 
chap/chap_mutual_not_set.sh@16 -- # MPASS=321978654321 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@19 -- # iscsitestinit 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@21 -- # set_up_iscsi_target 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@142 -- # pid=66208 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 66208' 00:11:55.071 iSCSI target launched. pid: 66208 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@145 -- # waitforlisten 66208 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@831 -- # '[' -z 66208 ']' 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:55.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:55.071 08:54:01 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:55.071 [2024-07-25 08:54:01.842197] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:55.071 [2024-07-25 08:54:01.842808] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66208 ] 00:11:55.071 [2024-07-25 08:54:02.124828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.331 [2024-07-25 08:54:02.357012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.590 08:54:02 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.590 08:54:02 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@864 -- # return 0 00:11:55.590 08:54:02 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:11:55.590 08:54:02 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.590 08:54:02 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:55.590 08:54:02 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.590 08:54:02 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:11:55.590 08:54:02 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.590 08:54:02 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.544 iscsi_tgt is listening. Running tests... 
00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.544 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.809 Malloc0 00:11:56.809 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.809 08:54:03 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:11:56.809 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 
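[Editor's note] For readability, the target provisioning that the rpc_cmd traces above and below carry out reduces to four SPDK RPCs (shown here as a bare command sketch; the harness invokes them through its rpc_cmd wrapper rather than directly):

```
iscsi_create_portal_group 1 10.0.0.1:3260                  # portal group tag 1
iscsi_create_initiator_group 2 ANY 10.0.0.2/32             # initiator group tag 2
bdev_malloc_create 64 512                                  # 64 MiB bdev, 512 B blocks -> Malloc0
iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d
```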
00:11:56.809 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.809 08:54:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.809 08:54:03 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@155 -- # sleep 1 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:57.744 configuring target for authentication 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@24 -- # echo 'configuring target for authentication' 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # 
case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # 
getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 0 -eq 1 ']' 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@99 -- # rpc_cmd iscsi_target_node_set_auth -g 1 -r iqn.2016-06.io.spdk:disk1 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 0 -eq 1 ']' 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@106 -- # rpc_cmd iscsi_set_discovery_auth -r -g 1 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.744 executing discovery without adding credential to initiator - we expect failure 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:11:57.744 configuring initiator with bidirectional authentication 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@28 -- # echo 'configuring initiator with bidirectional authentication' 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@29 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:11:57.744 08:54:04
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 
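[Editor's note] The default_initiator_chap_credentials and config_chap_credentials_for_initiator helpers traced throughout this log are plain sed toggles on /etc/iscsi/iscsid.conf: comment every CHAP line out, then uncomment it with the test credentials. A minimal, side-effect-free sketch of that toggle, run against a temporary copy rather than the real file (credential values taken from the log):

```shell
#!/usr/bin/env bash
# Sketch of the iscsid.conf CHAP toggle used by chap_common.sh,
# applied to a temp file instead of /etc/iscsi/iscsid.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = someuser
discovery.sendtargets.auth.password = somepass
EOF

# Step 1 (default_initiator_chap_credentials): comment the settings out.
sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' "$conf"
sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' "$conf"
sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' "$conf"

# Step 2 (config_chap_credentials_for_initiator): uncomment with test values.
sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' "$conf"
sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' "$conf"
sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' "$conf"

cat "$conf"
rm -f "$conf"
```

In the real harness each toggle is followed by restart_iscsid (sleep 3; systemctl restart iscsid; sleep 1) so iscsid picks up the rewritten file.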
00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:11:57.744 iscsiadm: No matching sessions found 00:11:57.744 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # true 00:11:57.745 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:11:57.745 iscsiadm: No records found 00:11:57.745 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # true 00:11:57.745 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:57.745 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:57.745 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:57.745 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:57.745 08:54:04 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:58.003 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:58.003 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:58.003 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:58.003 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:58.003 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:58.003 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:11:58.003 08:54:04 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:12:01.290 08:54:07 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:12:01.290 08:54:07 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:12:01.858 08:54:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:01.858 08:54:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@116 -- # '[' 1 -eq 1 ']' 00:12:01.858 08:54:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@117 -- # sed -i 's/#node.session.auth.authmethod = CHAP/node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:12:02.118 08:54:08 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@118 -- # sed -i 's/#node.session.auth.username =.*/node.session.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:12:02.118 08:54:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@119 -- # sed -i 's/#node.session.auth.password =.*/node.session.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:12:02.118 08:54:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' 1 -eq 1 ']' 00:12:02.118 08:54:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n 321978654321 ']' 00:12:02.118 08:54:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n mchapo ']' 00:12:02.118 08:54:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@121 -- # sed -i 's/#node.session.auth.username_in =.*/node.session.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:12:02.118 08:54:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@122 -- # sed -i 's/#node.session.auth.password_in =.*/node.session.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:12:02.118 08:54:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:12:02.118 08:54:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:12:02.118 08:54:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:12:02.118 08:54:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:12:02.118 08:54:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:12:02.118 08:54:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:12:02.118 08:54:09 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:12:02.118 08:54:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:12:02.118 08:54:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:12:02.118 08:54:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@135 -- # restart_iscsid 00:12:02.118 08:54:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:12:05.406 08:54:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:12:05.406 08:54:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:12:06.358 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:12:06.358 executing discovery - target should not be discovered since the -m option was not used 00:12:06.358 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@30 -- # echo 'executing discovery - target should not be discovered since the -m option was not used' 00:12:06.358 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@31 -- # rc=0 00:12:06.358 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:06.358 [2024-07-25 08:54:13.152027] iscsi.c: 982:iscsi_auth_params: *ERROR*: Initiator wants to use mutual CHAP for security, but it's not enabled. 
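The rejected discovery above is the expected negative case: the initiator's iscsid.conf still requests mutual CHAP while the target's auth group was created without the mutual secret (no -m). As the xtrace shows, the chap_common.sh helpers drive the initiator side entirely with sed comment/uncomment edits to iscsid.conf. A minimal self-contained sketch of that pattern, run against a scratch file rather than the real /etc/iscsi/iscsid.conf (the chapo/123456789123 values are the test's own; the scratch-file setup is mine):

```shell
#!/usr/bin/env bash
# Sketch of the chap_common.sh sed pattern: the config ships with CHAP
# settings commented out; enabling one-way CHAP means uncommenting the
# authmethod line and rewriting the username/password lines in place.
conf=$(mktemp)
cat > "$conf" <<'EOF'
#node.session.auth.authmethod = CHAP
#node.session.auth.username = username
#node.session.auth.password = password
EOF

# Uncomment the authmethod line, then fill in the test credentials.
sed -i 's/#node.session.auth.authmethod = CHAP/node.session.auth.authmethod = CHAP/' "$conf"
sed -i 's/#node.session.auth.username =.*/node.session.auth.username = chapo/' "$conf"
sed -i 's/#node.session.auth.password =.*/node.session.auth.password = 123456789123/' "$conf"

# Count how many settings are now active (uncommented).
enabled=$(grep -c '^node\.session\.auth' "$conf")
echo "$enabled CHAP settings enabled"
rm -f "$conf"
```

Reverting to the defaults (default_initiator_chap_credentials in the log) is the mirror image: sed patterns anchored at `^node.session.auth` re-insert the leading `#`.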
00:12:06.358 [2024-07-25 08:54:13.152093] iscsi.c:1957:iscsi_op_login_rsp_handle_csg_bit: *ERROR*: iscsi_auth_params() failed 00:12:06.358 iscsiadm: Login failed to authenticate with target 00:12:06.358 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:12:06.358 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:12:06.358 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # rc=24 00:12:06.358 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@33 -- # '[' 24 -eq 0 ']' 00:12:06.358 configuring target for authentication with the -m option 00:12:06.358 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@37 -- # echo 'configuring target for authentication with the -m option' 00:12:06.358 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@38 -- # config_chap_credentials_for_target -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:12:06.358 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:12:06.358 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- 
# AUTH_GROUP_ID=1 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=2 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@24 -- # case ${opt} in 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 2 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 2 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 1 -eq 1 ']' 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@97 -- # rpc_cmd iscsi_target_node_set_auth -g 2 -r -m iqn.2016-06.io.spdk:disk1 00:12:06.359 08:54:13 
iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 2 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.359 executing discovery: 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@39 -- # echo 'executing discovery:' 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@40 -- # rc=0 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@41 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:06.359 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@42 -- # '[' 0 -ne 0 ']' 00:12:06.359 executing login: 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@46 -- # echo 'executing login:' 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@47 -- # rc=0 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@48 -- # iscsiadm -m node -l -p 10.0.0.1:3260 00:12:06.359 Logging in to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:12:06.359 Login to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 
successful. 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@49 -- # '[' 0 -ne 0 ']' 00:12:06.359 DONE 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@54 -- # echo DONE 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@55 -- # default_initiator_chap_credentials 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:12:06.359 [2024-07-25 08:54:13.287929] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:06.359 Logging out of session [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:12:06.359 Logout of [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] successful. 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = 
CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:12:06.359 08:54:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:12:09.645 08:54:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:12:09.645 08:54:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:12:10.580 08:54:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:10.580 08:54:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@57 -- # trap - SIGINT SIGTERM EXIT 00:12:10.580 08:54:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@59 -- # killprocess 66208 00:12:10.580 08:54:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@950 -- # '[' -z 66208 ']' 00:12:10.580 08:54:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # kill -0 66208 00:12:10.580 08:54:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@955 -- # uname 00:12:10.580 08:54:17 iscsi_tgt.chap_mutual_auth -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.580 08:54:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66208 00:12:10.580 08:54:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:10.580 08:54:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:10.580 killing process with pid 66208 00:12:10.580 08:54:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66208' 00:12:10.580 08:54:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@969 -- # kill 66208 00:12:10.580 08:54:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@974 -- # wait 66208 00:12:13.870 08:54:20 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@61 -- # iscsitestfini 00:12:13.870 08:54:20 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:13.870 00:12:13.870 real 0m18.898s 00:12:13.870 user 0m18.598s 00:12:13.870 sys 0m1.046s 00:12:13.870 08:54:20 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.870 08:54:20 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:13.870 ************************************ 00:12:13.870 END TEST chap_mutual_auth 00:12:13.870 ************************************ 00:12:13.870 08:54:20 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@34 -- # run_test iscsi_tgt_reset /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:12:13.870 08:54:20 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:13.870 08:54:20 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.870 08:54:20 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:13.870 ************************************ 00:12:13.870 START TEST iscsi_tgt_reset 00:12:13.870 ************************************ 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:12:13.870 * Looking for test storage... 00:12:13.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@11 -- # iscsitestinit 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@18 -- # hash sg_reset 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@22 -- # timing_enter start_iscsi_tgt 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@25 -- # pid=66546 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@26 -- # echo 'Process pid: 66546' 00:12:13.870 Process pid: 66546 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@28 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@30 -- # waitforlisten 66546 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@831 -- # '[' -z 66546 ']' 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:13.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:13.870 08:54:20 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:13.870 [2024-07-25 08:54:20.823545] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:13.870 [2024-07-25 08:54:20.823708] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66546 ] 00:12:14.139 [2024-07-25 08:54:20.988478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.139 [2024-07-25 08:54:21.239418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.705 08:54:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:14.705 08:54:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@864 -- # return 0 00:12:14.705 08:54:21 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@31 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:12:14.705 08:54:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.706 08:54:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:14.706 08:54:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.706 08:54:21 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@32 -- # rpc_cmd framework_start_init 
00:12:14.706 08:54:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.706 08:54:21 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:15.643 iscsi_tgt is listening. Running tests... 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@33 -- # echo 'iscsi_tgt is listening. Running tests...' 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@35 -- # timing_exit start_iscsi_tgt 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@37 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@38 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@39 -- # rpc_cmd bdev_malloc_create 64 512 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:15.643 Malloc0 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@44 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.643 08:54:22 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@45 -- # sleep 1 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@47 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:17.019 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@48 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:12:17.019 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:17.019 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
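After the login succeeds, the test calls waitforiscsidevices, a bounded poll over `iscsiadm -m session -P 3` that waits for the kernel to surface the expected number of SCSI disks (the `(( i <= 20 ))` loop visible in the next log entries). A self-contained bash sketch of that retry pattern, with a stub probe standing in for the iscsiadm pipeline so it runs anywhere (the stub and its timings are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the waitforiscsidevices polling pattern from iscsi_tgt/common.sh:
# retry a device count up to 20 times until it matches the expected value.
attempts=0
probe_count() {
    # Stand-in for: iscsiadm -m session -P 3 | grep -c 'Attached scsi disk sd[a-z]*'
    # Reports 0 devices twice, then 1, mimicking a disk appearing after login.
    attempts=$((attempts + 1))
    if [ "$attempts" -ge 3 ]; then n=1; else n=0; fi
}

wait_for_count() {
    local want=$1 i
    for ((i = 1; i <= 20; i++)); do
        probe_count
        [ "$n" -eq "$want" ] && return 0
        sleep 0.1    # the real helper sleeps 1s between probes
    done
    return 1
}

if wait_for_count 1; then status=ok; else status=timeout; fi
echo "$status after $attempts probes"
```

The bounded loop matters here: a freshly logged-in session can report the connection before udev has created the block device, so a single immediate check would flake.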
00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@49 -- # waitforiscsidevices 1 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@116 -- # local num=1 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:17.019 [2024-07-25 08:54:23.828091] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # n=1 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@123 -- # return 0 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # iscsiadm -m session -P 3 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # grep 'Attached scsi disk' 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # awk '{print $4}' 00:12:17.019 FIO pid: 66620 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # dev=sda 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@54 -- # fiopid=66620 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@55 -- # echo 'FIO pid: 66620' 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 60 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@57 -- # trap 'iscsicleanup; killprocess $pid; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 
00:12:17.019 08:54:23 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:12:17.019 [global] 00:12:17.019 thread=1 00:12:17.019 invalidate=1 00:12:17.019 rw=read 00:12:17.019 time_based=1 00:12:17.019 runtime=60 00:12:17.019 ioengine=libaio 00:12:17.019 direct=1 00:12:17.019 bs=512 00:12:17.019 iodepth=1 00:12:17.019 norandommap=1 00:12:17.019 numjobs=1 00:12:17.019 00:12:17.019 [job0] 00:12:17.019 filename=/dev/sda 00:12:17.019 queue_depth set to 113 (sda) 00:12:17.019 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:12:17.019 fio-3.35 00:12:17.019 Starting 1 thread 00:12:17.956 08:54:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 66546 00:12:17.956 08:54:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 66620 00:12:17.956 08:54:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:12:17.956 [2024-07-25 08:54:24.869168] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:12:17.956 [2024-07-25 08:54:24.869292] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:12:17.956 [2024-07-25 08:54:24.870169] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.956 08:54:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:12:18.893 08:54:25 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 66546 00:12:18.893 08:54:25 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 66620 00:12:18.893 08:54:25 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:12:18.893 08:54:25 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:12:19.830 08:54:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 66546 00:12:19.830 08:54:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 66620 00:12:19.830 08:54:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:12:19.830 [2024-07-25 
08:54:26.882971] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:12:19.830 [2024-07-25 08:54:26.883056] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:12:19.830 [2024-07-25 08:54:26.884211] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:19.830 08:54:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:12:20.835 08:54:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 66546 00:12:20.835 08:54:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 66620 00:12:20.835 08:54:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:12:20.835 08:54:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:12:22.211 08:54:28 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 66546 00:12:22.211 08:54:28 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 66620 00:12:22.211 08:54:28 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:12:22.211 [2024-07-25 08:54:28.898651] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:12:22.211 [2024-07-25 08:54:28.898776] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:12:22.211 [2024-07-25 08:54:28.899740] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.211 08:54:28 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 66546 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 66620 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@70 -- # kill 66620 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # wait 66620 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # true 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@73 -- # trap - SIGINT SIGTERM EXIT 
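The reset loop above fires `sg_reset -d /dev/sda` three times while fio reads from the same disk, and between iterations it runs `kill -s 0` on both the target pid (66546) and the fio pid (66620): signal 0 delivers nothing but fails if the process is gone, so either side dying aborts the test immediately. A self-contained sketch of that supervision pattern, with short `sleep` processes standing in for the SPDK target and the fio job:

```shell
#!/usr/bin/env bash
# Sketch of the reset.sh liveness checks: probe both worker pids with
# signal 0 on every loop iteration; a failed probe means the process died.
sleep 5 & tgt_pid=$!
sleep 5 & fio_pid=$!

alive() { kill -s 0 "$1" 2>/dev/null; }

ok=yes
for i in 1 2 3; do
    alive "$tgt_pid" || ok=no
    alive "$fio_pid" || ok=no
    # real script at this point: sg_reset -d /dev/sda; sleep 1
done

kill "$tgt_pid" "$fio_pid" 2>/dev/null
wait "$tgt_pid" "$fio_pid" 2>/dev/null || true
echo "processes alive during loop: $ok"
```

Checking liveness before and after each reset is what turns "sg_reset did not crash anything" into an assertion rather than an assumption.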
00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@75 -- # iscsicleanup 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:12:23.148 Cleaning up iSCSI connection 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:12:23.148 fio: pid=66645, err=19/file:io_u.c:1889, func=io_u error, error=No such device 00:12:23.148 fio: io_u error on file /dev/sda: No such device: read offset=44774400, buflen=512 00:12:23.148 Logging out of session [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:23.148 Logout of [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:12:23.148 00:12:23.148 job0: (groupid=0, jobs=1): err=19 (file:io_u.c:1889, func=io_u error, error=No such device): pid=66645: Thu Jul 25 08:54:29 2024 00:12:23.148 read: IOPS=15.2k, BW=7581KiB/s (7763kB/s)(42.7MiB/5768msec) 00:12:23.148 slat (usec): min=2, max=1561, avg= 4.95, stdev= 5.55 00:12:23.148 clat (nsec): min=932, max=2173.9k, avg=60470.07, stdev=15802.17 00:12:23.148 lat (usec): min=48, max=2179, avg=65.40, stdev=16.15 00:12:23.148 clat percentiles (usec): 00:12:23.148 | 1.00th=[ 49], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 56], 00:12:23.148 | 30.00th=[ 58], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 61], 00:12:23.148 | 70.00th=[ 62], 80.00th=[ 64], 90.00th=[ 71], 95.00th=[ 76], 00:12:23.148 | 99.00th=[ 87], 99.50th=[ 93], 99.90th=[ 126], 99.95th=[ 223], 00:12:23.148 | 99.99th=[ 725] 00:12:23.148 bw ( KiB/s): min= 7308, max= 7970, per=100.00%, avg=7595.09, stdev=219.26, samples=11 00:12:23.148 iops : min=14616, max=15940, avg=15190.18, stdev=438.52, samples=11 00:12:23.148 lat (nsec) : 1000=0.01% 00:12:23.148 lat (usec) : 20=0.01%, 50=3.30%, 100=96.43%, 250=0.22%, 500=0.03% 00:12:23.148 lat (usec) : 750=0.01%, 1000=0.01% 00:12:23.148 lat (msec) : 2=0.01%, 4=0.01% 00:12:23.148 cpu : usr=2.38%, sys=11.34%, 
ctx=87458, majf=0, minf=2 00:12:23.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.148 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.148 issued rwts: total=87451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.148 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.148 00:12:23.148 Run status group 0 (all jobs): 00:12:23.148 READ: bw=7581KiB/s (7763kB/s), 7581KiB/s-7581KiB/s (7763kB/s-7763kB/s), io=42.7MiB (44.8MB), run=5768-5768msec 00:12:23.148 00:12:23.148 Disk stats (read/write): 00:12:23.148 sda: ios=85771/0, merge=0/0, ticks=5140/0, in_queue=5140, util=98.40% 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@985 -- # rm -rf 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@76 -- # killprocess 66546 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@950 -- # '[' -z 66546 ']' 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # kill -0 66546 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@955 -- # uname 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:23.148 08:54:29 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66546 00:12:23.148 08:54:30 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:23.148 08:54:30 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:23.148 killing process with pid 66546 00:12:23.148 08:54:30 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66546' 00:12:23.148 08:54:30 iscsi_tgt.iscsi_tgt_reset 
-- common/autotest_common.sh@969 -- # kill 66546 00:12:23.148 08:54:30 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@974 -- # wait 66546 00:12:26.435 08:54:32 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@77 -- # iscsitestfini 00:12:26.435 08:54:32 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:26.435 00:12:26.435 real 0m12.383s 00:12:26.435 user 0m10.005s 00:12:26.435 sys 0m2.114s 00:12:26.435 08:54:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.435 08:54:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:26.435 ************************************ 00:12:26.435 END TEST iscsi_tgt_reset 00:12:26.435 ************************************ 00:12:26.435 08:54:32 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@35 -- # run_test iscsi_tgt_rpc_config /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:12:26.435 08:54:32 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:26.435 08:54:32 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.435 08:54:32 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:26.435 ************************************ 00:12:26.435 START TEST iscsi_tgt_rpc_config 00:12:26.436 ************************************ 00:12:26.436 08:54:32 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:12:26.436 * Looking for test storage... 
00:12:26.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:26.436 
08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@11 -- # iscsitestinit 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@16 -- # rpc_config_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@18 -- # timing_enter start_iscsi_tgt 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@21 -- # pid=66822 00:12:26.436 Process pid: 66822 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@22 -- # echo 'Process pid: 66822' 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@20 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@24 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@26 -- # waitforlisten 66822 00:12:26.436 08:54:33 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@831 -- # '[' -z 66822 ']' 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:26.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:26.436 08:54:33 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:12:26.436 [2024-07-25 08:54:33.256482] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:26.436 [2024-07-25 08:54:33.256627] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66822 ] 00:12:26.436 [2024-07-25 08:54:33.424000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.693 [2024-07-25 08:54:33.682299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.951 08:54:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:26.951 08:54:34 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@864 -- # return 0 00:12:26.951 08:54:34 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:12:26.951 08:54:34 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@28 -- # rpc_wait_pid=66838 00:12:26.951 08:54:34 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@29 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:12:27.211 08:54:34 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@32 -- # ps 66838 00:12:27.211 PID TTY STAT TIME COMMAND 00:12:27.211 66838 ? S 0:00 python3 /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:12:27.211 08:54:34 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:28.585 08:54:35 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@35 -- # sleep 1 00:12:29.520 iscsi_tgt is listening. Running tests... 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@36 -- # echo 'iscsi_tgt is listening. Running tests...' 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@39 -- # NOT ps 66838 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # local es=0 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@652 -- # valid_exec_arg ps 66838 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@638 -- # local arg=ps 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -t ps 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # type -P ps 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # arg=/usr/bin/ps 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/ps ]] 00:12:29.520 08:54:36 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@653 -- # ps 66838 00:12:29.520 PID TTY STAT TIME COMMAND 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@653 -- # es=1 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@43 -- # rpc_wait_pid=66868 00:12:29.520 08:54:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@44 -- # sleep 1 00:12:30.474 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@45 -- # NOT ps 66868 00:12:30.474 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # local es=0 00:12:30.474 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@652 -- # valid_exec_arg ps 66868 00:12:30.474 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@638 -- # local arg=ps 00:12:30.474 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:30.474 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -t ps 00:12:30.474 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:30.474 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # type -P ps 00:12:30.474 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:30.474 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # arg=/usr/bin/ps 00:12:30.474 08:54:37 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/ps ]] 00:12:30.474 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@653 -- # ps 66868 00:12:30.474 PID TTY STAT TIME COMMAND 00:12:30.474 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@653 -- # es=1 00:12:30.475 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:30.475 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:30.475 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:30.475 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@47 -- # timing_exit start_iscsi_tgt 00:12:30.475 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:30.475 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:12:30.732 08:54:37 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@49 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py /home/vagrant/spdk_repo/spdk/scripts/rpc.py 10.0.0.1 10.0.0.2 3260 10.0.0.2/32 spdk_iscsi_ns 00:12:57.283 [2024-07-25 08:55:01.682432] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.544 [2024-07-25 08:55:04.628151] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:59.445 verify_log_flag_rpc_methods passed 00:12:59.445 create_malloc_bdevs_rpc_methods passed 00:12:59.445 verify_portal_groups_rpc_methods passed 00:12:59.445 verify_initiator_groups_rpc_method passed. 00:12:59.445 This issue will be fixed later. 00:12:59.445 verify_target_nodes_rpc_methods passed. 
00:12:59.445 verify_scsi_devices_rpc_methods passed 00:12:59.445 verify_iscsi_connection_rpc_methods passed 00:12:59.445 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:12:59.445 [ 00:12:59.445 { 00:12:59.445 "name": "Malloc0", 00:12:59.445 "aliases": [ 00:12:59.445 "950bc5d9-e6e2-4aaf-bf5a-5ff67b12ecee" 00:12:59.445 ], 00:12:59.445 "product_name": "Malloc disk", 00:12:59.445 "block_size": 512, 00:12:59.445 "num_blocks": 131072, 00:12:59.445 "uuid": "950bc5d9-e6e2-4aaf-bf5a-5ff67b12ecee", 00:12:59.445 "assigned_rate_limits": { 00:12:59.445 "rw_ios_per_sec": 0, 00:12:59.445 "rw_mbytes_per_sec": 0, 00:12:59.445 "r_mbytes_per_sec": 0, 00:12:59.445 "w_mbytes_per_sec": 0 00:12:59.445 }, 00:12:59.445 "claimed": false, 00:12:59.445 "zoned": false, 00:12:59.445 "supported_io_types": { 00:12:59.445 "read": true, 00:12:59.445 "write": true, 00:12:59.445 "unmap": true, 00:12:59.445 "flush": true, 00:12:59.445 "reset": true, 00:12:59.445 "nvme_admin": false, 00:12:59.445 "nvme_io": false, 00:12:59.445 "nvme_io_md": false, 00:12:59.445 "write_zeroes": true, 00:12:59.445 "zcopy": true, 00:12:59.445 "get_zone_info": false, 00:12:59.445 "zone_management": false, 00:12:59.445 "zone_append": false, 00:12:59.445 "compare": false, 00:12:59.445 "compare_and_write": false, 00:12:59.445 "abort": true, 00:12:59.445 "seek_hole": false, 00:12:59.445 "seek_data": false, 00:12:59.445 "copy": true, 00:12:59.445 "nvme_iov_md": false 00:12:59.445 }, 00:12:59.445 "memory_domains": [ 00:12:59.445 { 00:12:59.445 "dma_device_id": "system", 00:12:59.445 "dma_device_type": 1 00:12:59.445 }, 00:12:59.445 { 00:12:59.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.445 "dma_device_type": 2 00:12:59.445 } 00:12:59.445 ], 00:12:59.445 "driver_specific": {} 00:12:59.445 }, 00:12:59.445 { 00:12:59.445 "name": "Malloc1", 00:12:59.445 "aliases": [ 00:12:59.445 "7cd35251-e5d8-4d8e-a0db-58c065365b09" 00:12:59.445 ], 
00:12:59.445 "product_name": "Malloc disk", 00:12:59.445 "block_size": 512, 00:12:59.445 "num_blocks": 131072, 00:12:59.445 "uuid": "7cd35251-e5d8-4d8e-a0db-58c065365b09", 00:12:59.445 "assigned_rate_limits": { 00:12:59.445 "rw_ios_per_sec": 0, 00:12:59.445 "rw_mbytes_per_sec": 0, 00:12:59.445 "r_mbytes_per_sec": 0, 00:12:59.445 "w_mbytes_per_sec": 0 00:12:59.445 }, 00:12:59.445 "claimed": false, 00:12:59.445 "zoned": false, 00:12:59.445 "supported_io_types": { 00:12:59.445 "read": true, 00:12:59.445 "write": true, 00:12:59.445 "unmap": true, 00:12:59.445 "flush": true, 00:12:59.445 "reset": true, 00:12:59.445 "nvme_admin": false, 00:12:59.445 "nvme_io": false, 00:12:59.445 "nvme_io_md": false, 00:12:59.445 "write_zeroes": true, 00:12:59.445 "zcopy": true, 00:12:59.445 "get_zone_info": false, 00:12:59.445 "zone_management": false, 00:12:59.445 "zone_append": false, 00:12:59.445 "compare": false, 00:12:59.445 "compare_and_write": false, 00:12:59.445 "abort": true, 00:12:59.445 "seek_hole": false, 00:12:59.445 "seek_data": false, 00:12:59.445 "copy": true, 00:12:59.445 "nvme_iov_md": false 00:12:59.445 }, 00:12:59.445 "memory_domains": [ 00:12:59.445 { 00:12:59.445 "dma_device_id": "system", 00:12:59.445 "dma_device_type": 1 00:12:59.445 }, 00:12:59.445 { 00:12:59.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.445 "dma_device_type": 2 00:12:59.445 } 00:12:59.445 ], 00:12:59.445 "driver_specific": {} 00:12:59.445 }, 00:12:59.445 { 00:12:59.445 "name": "Malloc2", 00:12:59.445 "aliases": [ 00:12:59.445 "ce84c502-3dca-480f-a533-16de3a16bb81" 00:12:59.445 ], 00:12:59.445 "product_name": "Malloc disk", 00:12:59.445 "block_size": 512, 00:12:59.445 "num_blocks": 131072, 00:12:59.445 "uuid": "ce84c502-3dca-480f-a533-16de3a16bb81", 00:12:59.445 "assigned_rate_limits": { 00:12:59.445 "rw_ios_per_sec": 0, 00:12:59.445 "rw_mbytes_per_sec": 0, 00:12:59.445 "r_mbytes_per_sec": 0, 00:12:59.445 "w_mbytes_per_sec": 0 00:12:59.445 }, 00:12:59.445 "claimed": false, 00:12:59.445 
"zoned": false, 00:12:59.445 "supported_io_types": { 00:12:59.445 "read": true, 00:12:59.445 "write": true, 00:12:59.445 "unmap": true, 00:12:59.445 "flush": true, 00:12:59.445 "reset": true, 00:12:59.445 "nvme_admin": false, 00:12:59.445 "nvme_io": false, 00:12:59.445 "nvme_io_md": false, 00:12:59.445 "write_zeroes": true, 00:12:59.445 "zcopy": true, 00:12:59.445 "get_zone_info": false, 00:12:59.445 "zone_management": false, 00:12:59.445 "zone_append": false, 00:12:59.445 "compare": false, 00:12:59.445 "compare_and_write": false, 00:12:59.445 "abort": true, 00:12:59.445 "seek_hole": false, 00:12:59.445 "seek_data": false, 00:12:59.445 "copy": true, 00:12:59.445 "nvme_iov_md": false 00:12:59.445 }, 00:12:59.445 "memory_domains": [ 00:12:59.445 { 00:12:59.445 "dma_device_id": "system", 00:12:59.445 "dma_device_type": 1 00:12:59.445 }, 00:12:59.445 { 00:12:59.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.445 "dma_device_type": 2 00:12:59.445 } 00:12:59.445 ], 00:12:59.445 "driver_specific": {} 00:12:59.445 }, 00:12:59.445 { 00:12:59.445 "name": "Malloc3", 00:12:59.445 "aliases": [ 00:12:59.445 "593056c2-18ee-4ef6-88f8-210d8f1a2197" 00:12:59.445 ], 00:12:59.445 "product_name": "Malloc disk", 00:12:59.445 "block_size": 512, 00:12:59.445 "num_blocks": 131072, 00:12:59.445 "uuid": "593056c2-18ee-4ef6-88f8-210d8f1a2197", 00:12:59.445 "assigned_rate_limits": { 00:12:59.445 "rw_ios_per_sec": 0, 00:12:59.445 "rw_mbytes_per_sec": 0, 00:12:59.445 "r_mbytes_per_sec": 0, 00:12:59.445 "w_mbytes_per_sec": 0 00:12:59.445 }, 00:12:59.445 "claimed": false, 00:12:59.445 "zoned": false, 00:12:59.445 "supported_io_types": { 00:12:59.445 "read": true, 00:12:59.445 "write": true, 00:12:59.445 "unmap": true, 00:12:59.445 "flush": true, 00:12:59.445 "reset": true, 00:12:59.445 "nvme_admin": false, 00:12:59.445 "nvme_io": false, 00:12:59.445 "nvme_io_md": false, 00:12:59.445 "write_zeroes": true, 00:12:59.445 "zcopy": true, 00:12:59.445 "get_zone_info": false, 00:12:59.445 
"zone_management": false, 00:12:59.445 "zone_append": false, 00:12:59.445 "compare": false, 00:12:59.445 "compare_and_write": false, 00:12:59.445 "abort": true, 00:12:59.445 "seek_hole": false, 00:12:59.445 "seek_data": false, 00:12:59.445 "copy": true, 00:12:59.445 "nvme_iov_md": false 00:12:59.445 }, 00:12:59.446 "memory_domains": [ 00:12:59.446 { 00:12:59.446 "dma_device_id": "system", 00:12:59.446 "dma_device_type": 1 00:12:59.446 }, 00:12:59.446 { 00:12:59.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.446 "dma_device_type": 2 00:12:59.446 } 00:12:59.446 ], 00:12:59.446 "driver_specific": {} 00:12:59.446 }, 00:12:59.446 { 00:12:59.446 "name": "Malloc4", 00:12:59.446 "aliases": [ 00:12:59.446 "a80bccbd-d485-4d89-9401-87fdf95bd8a5" 00:12:59.446 ], 00:12:59.446 "product_name": "Malloc disk", 00:12:59.446 "block_size": 512, 00:12:59.446 "num_blocks": 131072, 00:12:59.446 "uuid": "a80bccbd-d485-4d89-9401-87fdf95bd8a5", 00:12:59.446 "assigned_rate_limits": { 00:12:59.446 "rw_ios_per_sec": 0, 00:12:59.446 "rw_mbytes_per_sec": 0, 00:12:59.446 "r_mbytes_per_sec": 0, 00:12:59.446 "w_mbytes_per_sec": 0 00:12:59.446 }, 00:12:59.446 "claimed": false, 00:12:59.446 "zoned": false, 00:12:59.446 "supported_io_types": { 00:12:59.446 "read": true, 00:12:59.446 "write": true, 00:12:59.446 "unmap": true, 00:12:59.446 "flush": true, 00:12:59.446 "reset": true, 00:12:59.446 "nvme_admin": false, 00:12:59.446 "nvme_io": false, 00:12:59.446 "nvme_io_md": false, 00:12:59.446 "write_zeroes": true, 00:12:59.446 "zcopy": true, 00:12:59.446 "get_zone_info": false, 00:12:59.446 "zone_management": false, 00:12:59.446 "zone_append": false, 00:12:59.446 "compare": false, 00:12:59.446 "compare_and_write": false, 00:12:59.446 "abort": true, 00:12:59.446 "seek_hole": false, 00:12:59.446 "seek_data": false, 00:12:59.446 "copy": true, 00:12:59.446 "nvme_iov_md": false 00:12:59.446 }, 00:12:59.446 "memory_domains": [ 00:12:59.446 { 00:12:59.446 "dma_device_id": "system", 00:12:59.446 
"dma_device_type": 1 00:12:59.446 }, 00:12:59.446 { 00:12:59.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.446 "dma_device_type": 2 00:12:59.446 } 00:12:59.446 ], 00:12:59.446 "driver_specific": {} 00:12:59.446 }, 00:12:59.446 { 00:12:59.446 "name": "Malloc5", 00:12:59.446 "aliases": [ 00:12:59.446 "0fa09df2-d3b7-4dca-a213-dfee68aed426" 00:12:59.446 ], 00:12:59.446 "product_name": "Malloc disk", 00:12:59.446 "block_size": 512, 00:12:59.446 "num_blocks": 131072, 00:12:59.446 "uuid": "0fa09df2-d3b7-4dca-a213-dfee68aed426", 00:12:59.446 "assigned_rate_limits": { 00:12:59.446 "rw_ios_per_sec": 0, 00:12:59.446 "rw_mbytes_per_sec": 0, 00:12:59.446 "r_mbytes_per_sec": 0, 00:12:59.446 "w_mbytes_per_sec": 0 00:12:59.446 }, 00:12:59.446 "claimed": false, 00:12:59.446 "zoned": false, 00:12:59.446 "supported_io_types": { 00:12:59.446 "read": true, 00:12:59.446 "write": true, 00:12:59.446 "unmap": true, 00:12:59.446 "flush": true, 00:12:59.446 "reset": true, 00:12:59.446 "nvme_admin": false, 00:12:59.446 "nvme_io": false, 00:12:59.446 "nvme_io_md": false, 00:12:59.446 "write_zeroes": true, 00:12:59.446 "zcopy": true, 00:12:59.446 "get_zone_info": false, 00:12:59.446 "zone_management": false, 00:12:59.446 "zone_append": false, 00:12:59.446 "compare": false, 00:12:59.446 "compare_and_write": false, 00:12:59.446 "abort": true, 00:12:59.446 "seek_hole": false, 00:12:59.446 "seek_data": false, 00:12:59.446 "copy": true, 00:12:59.446 "nvme_iov_md": false 00:12:59.446 }, 00:12:59.446 "memory_domains": [ 00:12:59.446 { 00:12:59.446 "dma_device_id": "system", 00:12:59.446 "dma_device_type": 1 00:12:59.446 }, 00:12:59.446 { 00:12:59.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.446 "dma_device_type": 2 00:12:59.446 } 00:12:59.446 ], 00:12:59.446 "driver_specific": {} 00:12:59.446 } 00:12:59.446 ] 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@53 -- # trap - SIGINT SIGTERM EXIT 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- 
rpc_config/rpc_config.sh@55 -- # iscsicleanup 00:12:59.446 Cleaning up iSCSI connection 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:12:59.446 iscsiadm: No matching sessions found 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@983 -- # true 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:12:59.446 iscsiadm: No records found 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@984 -- # true 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@985 -- # rm -rf 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@56 -- # killprocess 66822 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@950 -- # '[' -z 66822 ']' 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # kill -0 66822 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@955 -- # uname 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66822 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.446 killing process with pid 66822 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66822' 00:12:59.446 08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@969 -- # kill 66822 00:12:59.446 
08:55:06 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@974 -- # wait 66822 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@58 -- # iscsitestfini 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:13:03.702 00:13:03.702 real 0m37.656s 00:13:03.702 user 1m1.417s 00:13:03.702 sys 0m4.348s 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.702 ************************************ 00:13:03.702 END TEST iscsi_tgt_rpc_config 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:13:03.702 ************************************ 00:13:03.702 08:55:10 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@36 -- # run_test iscsi_tgt_iscsi_lvol /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:13:03.702 08:55:10 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:03.702 08:55:10 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.702 08:55:10 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:13:03.702 ************************************ 00:13:03.702 START TEST iscsi_tgt_iscsi_lvol 00:13:03.702 ************************************ 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:13:03.702 * Looking for test storage... 
00:13:03.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:13:03.702 08:55:10 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@11 -- # iscsitestinit 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@13 -- # MALLOC_BDEV_SIZE=128 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@15 -- # '[' 1 -eq 1 ']' 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@16 -- # NUM_LVS=10 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@17 -- # NUM_LVOL=10 00:13:03.702 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@23 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@24 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@26 -- # timing_enter start_iscsi_tgt 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@29 -- # pid=67467 00:13:03.703 Process pid: 67467 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@30 -- # echo 'Process pid: 67467' 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@32 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM 
EXIT 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@34 -- # waitforlisten 67467 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@831 -- # '[' -z 67467 ']' 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@28 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:03.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:03.703 08:55:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:03.962 [2024-07-25 08:55:10.954553] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:03.962 [2024-07-25 08:55:10.954713] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67467 ] 00:13:04.220 [2024-07-25 08:55:11.125227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.478 [2024-07-25 08:55:11.416573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.478 [2024-07-25 08:55:11.416622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.478 [2024-07-25 08:55:11.416662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.478 [2024-07-25 08:55:11.416674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.736 08:55:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:04.736 08:55:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@864 -- # return 0 00:13:04.736 08:55:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:13:04.995 08:55:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:06.375 iscsi_tgt is listening. Running tests... 00:13:06.375 08:55:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@37 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:13:06.375 08:55:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@39 -- # timing_exit start_iscsi_tgt 00:13:06.375 08:55:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:06.375 08:55:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:06.375 08:55:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@41 -- # timing_enter setup 00:13:06.375 08:55:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:06.375 08:55:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:06.375 08:55:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:13:06.636 08:55:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # seq 1 10 00:13:06.636 08:55:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:06.636 08:55:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=3 00:13:06.636 08:55:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 3 ANY 10.0.0.2/32 00:13:06.895 08:55:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 1 -eq 1 ']' 00:13:06.895 08:55:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:07.155 08:55:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # malloc_bdevs='Malloc0 ' 00:13:07.155 08:55:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:07.721 08:55:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # malloc_bdevs+=Malloc1 00:13:07.721 08:55:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:07.979 08:55:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@53 -- # bdev=raid0 00:13:07.979 08:55:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs_1 -c 1048576 00:13:08.238 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=43791ec9-0da0-4aac-b910-4224dc5c7416 00:13:08.238 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:08.238 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:08.238 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:08.238 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 43791ec9-0da0-4aac-b910-4224dc5c7416 lbd_1 10 00:13:08.497 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0d491763-ea82-4412-9362-5f00af315958 00:13:08.497 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0d491763-ea82-4412-9362-5f00af315958:0 ' 00:13:08.498 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:08.498 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 43791ec9-0da0-4aac-b910-4224dc5c7416 lbd_2 10 00:13:08.756 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=75ea7efb-afdc-44c9-a621-61ec330f71f0 00:13:08.756 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='75ea7efb-afdc-44c9-a621-61ec330f71f0:1 ' 00:13:08.756 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:08.756 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 43791ec9-0da0-4aac-b910-4224dc5c7416 lbd_3 10 00:13:08.756 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=31683a0b-54ab-4eac-86e8-3a23ff755c38 00:13:08.756 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='31683a0b-54ab-4eac-86e8-3a23ff755c38:2 ' 00:13:08.756 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:08.756 08:55:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 43791ec9-0da0-4aac-b910-4224dc5c7416 lbd_4 10 00:13:09.014 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=dea0e9c7-f73c-4ea7-8100-2bd1b8a2cb7d 00:13:09.014 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='dea0e9c7-f73c-4ea7-8100-2bd1b8a2cb7d:3 ' 00:13:09.014 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:09.014 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 43791ec9-0da0-4aac-b910-4224dc5c7416 lbd_5 10 00:13:09.272 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8b35ead9-846f-43f3-b16d-16093ddb0a13 00:13:09.272 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8b35ead9-846f-43f3-b16d-16093ddb0a13:4 ' 00:13:09.272 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:09.272 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 43791ec9-0da0-4aac-b910-4224dc5c7416 lbd_6 10 00:13:09.531 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4792bfcb-f311-4858-b3e1-ea188ed52a78 00:13:09.531 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='4792bfcb-f311-4858-b3e1-ea188ed52a78:5 ' 00:13:09.531 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:09.531 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 43791ec9-0da0-4aac-b910-4224dc5c7416 lbd_7 10 00:13:09.790 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ba809ff0-8f89-4fb7-af3c-db2cedf93086 00:13:09.790 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ba809ff0-8f89-4fb7-af3c-db2cedf93086:6 ' 00:13:09.790 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:09.790 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 43791ec9-0da0-4aac-b910-4224dc5c7416 lbd_8 10 00:13:10.048 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7bc9c2fa-35d7-4e81-88a0-e3301e93b273 00:13:10.049 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7bc9c2fa-35d7-4e81-88a0-e3301e93b273:7 ' 00:13:10.049 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:10.049 08:55:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 43791ec9-0da0-4aac-b910-4224dc5c7416 lbd_9 10 00:13:10.049 08:55:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=29a6cf35-5d70-4369-a0cc-32419728566f 00:13:10.049 08:55:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='29a6cf35-5d70-4369-a0cc-32419728566f:8 ' 00:13:10.049 08:55:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:10.049 08:55:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 43791ec9-0da0-4aac-b910-4224dc5c7416 lbd_10 10 00:13:10.308 08:55:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=97358977-4c9b-4728-9cf0-110d75a1de94 00:13:10.308 08:55:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='97358977-4c9b-4728-9cf0-110d75a1de94:9 ' 00:13:10.308 08:55:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias '0d491763-ea82-4412-9362-5f00af315958:0 75ea7efb-afdc-44c9-a621-61ec330f71f0:1 31683a0b-54ab-4eac-86e8-3a23ff755c38:2 dea0e9c7-f73c-4ea7-8100-2bd1b8a2cb7d:3 8b35ead9-846f-43f3-b16d-16093ddb0a13:4 4792bfcb-f311-4858-b3e1-ea188ed52a78:5 ba809ff0-8f89-4fb7-af3c-db2cedf93086:6 7bc9c2fa-35d7-4e81-88a0-e3301e93b273:7 29a6cf35-5d70-4369-a0cc-32419728566f:8 97358977-4c9b-4728-9cf0-110d75a1de94:9 ' 1:3 256 -d 00:13:10.567 08:55:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:10.567 08:55:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=4 00:13:10.567 08:55:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 4 ANY 10.0.0.2/32 00:13:10.825 08:55:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 2 -eq 1 ']' 00:13:10.825 08:55:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:11.393 08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc2 00:13:11.393 08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc2 lvs_2 -c 1048576 00:13:11.393 08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=1b0b9305-0850-41ca-b98c-01e24ac7029a 00:13:11.393 
08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:11.393 08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:11.393 08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:11.393 08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1b0b9305-0850-41ca-b98c-01e24ac7029a lbd_1 10 00:13:11.651 08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=360af6d0-ab0f-4d10-b28d-c3e027a2e2d5 00:13:11.651 08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='360af6d0-ab0f-4d10-b28d-c3e027a2e2d5:0 ' 00:13:11.651 08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:11.651 08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1b0b9305-0850-41ca-b98c-01e24ac7029a lbd_2 10 00:13:11.908 08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=91948463-69a3-4e29-995e-8ec3a464efbf 00:13:11.908 08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='91948463-69a3-4e29-995e-8ec3a464efbf:1 ' 00:13:11.908 08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:11.908 08:55:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1b0b9305-0850-41ca-b98c-01e24ac7029a lbd_3 10 00:13:12.166 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0368b7da-b988-40bc-a772-d2725af8eb5e 00:13:12.166 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0368b7da-b988-40bc-a772-d2725af8eb5e:2 ' 00:13:12.166 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:12.166 08:55:19 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1b0b9305-0850-41ca-b98c-01e24ac7029a lbd_4 10 00:13:12.422 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=eac847c7-c020-44ab-8703-802314f93a46 00:13:12.422 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='eac847c7-c020-44ab-8703-802314f93a46:3 ' 00:13:12.422 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:12.422 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1b0b9305-0850-41ca-b98c-01e24ac7029a lbd_5 10 00:13:12.680 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8bcddd30-a815-4b47-9d0f-3eb47464e09d 00:13:12.680 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8bcddd30-a815-4b47-9d0f-3eb47464e09d:4 ' 00:13:12.680 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:12.680 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1b0b9305-0850-41ca-b98c-01e24ac7029a lbd_6 10 00:13:12.939 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2b38e4cb-0700-44d9-9bdb-efd156331621 00:13:12.939 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2b38e4cb-0700-44d9-9bdb-efd156331621:5 ' 00:13:12.939 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:12.939 08:55:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1b0b9305-0850-41ca-b98c-01e24ac7029a lbd_7 10 00:13:13.210 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=40046136-ea58-4dfa-8125-5f32a33e565d 
00:13:13.210 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='40046136-ea58-4dfa-8125-5f32a33e565d:6 ' 00:13:13.210 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:13.210 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1b0b9305-0850-41ca-b98c-01e24ac7029a lbd_8 10 00:13:13.210 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5b0adbad-a7df-4ad3-8e5a-d40b50b5de88 00:13:13.210 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5b0adbad-a7df-4ad3-8e5a-d40b50b5de88:7 ' 00:13:13.210 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:13.210 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1b0b9305-0850-41ca-b98c-01e24ac7029a lbd_9 10 00:13:13.469 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0f7d2e96-9afd-4f6e-adbe-b4340b43409b 00:13:13.469 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0f7d2e96-9afd-4f6e-adbe-b4340b43409b:8 ' 00:13:13.469 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:13.469 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1b0b9305-0850-41ca-b98c-01e24ac7029a lbd_10 10 00:13:13.727 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1635e237-6476-488b-97c0-3a52febab39d 00:13:13.727 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1635e237-6476-488b-97c0-3a52febab39d:9 ' 00:13:13.727 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias 
'360af6d0-ab0f-4d10-b28d-c3e027a2e2d5:0 91948463-69a3-4e29-995e-8ec3a464efbf:1 0368b7da-b988-40bc-a772-d2725af8eb5e:2 eac847c7-c020-44ab-8703-802314f93a46:3 8bcddd30-a815-4b47-9d0f-3eb47464e09d:4 2b38e4cb-0700-44d9-9bdb-efd156331621:5 40046136-ea58-4dfa-8125-5f32a33e565d:6 5b0adbad-a7df-4ad3-8e5a-d40b50b5de88:7 0f7d2e96-9afd-4f6e-adbe-b4340b43409b:8 1635e237-6476-488b-97c0-3a52febab39d:9 ' 1:4 256 -d 00:13:13.986 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:13.986 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=5 00:13:13.986 08:55:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 5 ANY 10.0.0.2/32 00:13:14.244 08:55:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 3 -eq 1 ']' 00:13:14.244 08:55:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:14.504 08:55:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc3 00:13:14.504 08:55:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc3 lvs_3 -c 1048576 00:13:14.763 08:55:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=6ae10abf-54dd-4bda-99d7-dff509019e60 00:13:14.763 08:55:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:14.763 08:55:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:15.022 08:55:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:15.022 08:55:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6ae10abf-54dd-4bda-99d7-dff509019e60 lbd_1 10 00:13:15.280 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=8cb650c3-44f5-4686-a4ce-d52a82c561b7 00:13:15.280 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8cb650c3-44f5-4686-a4ce-d52a82c561b7:0 ' 00:13:15.280 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:15.280 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6ae10abf-54dd-4bda-99d7-dff509019e60 lbd_2 10 00:13:15.280 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3dd4a048-2cbb-4e96-b511-55371b5f67d6 00:13:15.280 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3dd4a048-2cbb-4e96-b511-55371b5f67d6:1 ' 00:13:15.280 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:15.280 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6ae10abf-54dd-4bda-99d7-dff509019e60 lbd_3 10 00:13:15.539 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=24b8ff4b-28c9-4cb5-a3e9-a2c4a43e5e5e 00:13:15.539 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='24b8ff4b-28c9-4cb5-a3e9-a2c4a43e5e5e:2 ' 00:13:15.539 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:15.539 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6ae10abf-54dd-4bda-99d7-dff509019e60 lbd_4 10 00:13:15.799 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4afeaf81-900e-4e27-8a7e-3d42e71eb756 00:13:15.799 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4afeaf81-900e-4e27-8a7e-3d42e71eb756:3 ' 00:13:15.799 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 
$NUM_LVOL) 00:13:15.799 08:55:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6ae10abf-54dd-4bda-99d7-dff509019e60 lbd_5 10 00:13:16.058 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=dfd696fa-878c-4f4c-a53e-d3d5fef66738 00:13:16.058 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='dfd696fa-878c-4f4c-a53e-d3d5fef66738:4 ' 00:13:16.058 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:16.058 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6ae10abf-54dd-4bda-99d7-dff509019e60 lbd_6 10 00:13:16.319 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=cecea475-1b4b-44b4-9c3c-13e5293e2ae8 00:13:16.319 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='cecea475-1b4b-44b4-9c3c-13e5293e2ae8:5 ' 00:13:16.319 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:16.319 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6ae10abf-54dd-4bda-99d7-dff509019e60 lbd_7 10 00:13:16.579 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3d403a6d-ab7e-4b79-9c81-fcebc380f3c9 00:13:16.579 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3d403a6d-ab7e-4b79-9c81-fcebc380f3c9:6 ' 00:13:16.579 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:16.579 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6ae10abf-54dd-4bda-99d7-dff509019e60 lbd_8 10 00:13:16.836 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=ca1bff96-5d3e-46c0-a2fb-025a777f080b 00:13:16.836 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ca1bff96-5d3e-46c0-a2fb-025a777f080b:7 ' 00:13:16.836 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:16.837 08:55:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6ae10abf-54dd-4bda-99d7-dff509019e60 lbd_9 10 00:13:17.096 08:55:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=645affe2-497c-4c6f-b408-2108af15e8b7 00:13:17.096 08:55:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='645affe2-497c-4c6f-b408-2108af15e8b7:8 ' 00:13:17.096 08:55:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:17.096 08:55:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6ae10abf-54dd-4bda-99d7-dff509019e60 lbd_10 10 00:13:17.356 08:55:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=6aaae133-cb14-4104-9868-0e78bf4cae7f 00:13:17.356 08:55:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='6aaae133-cb14-4104-9868-0e78bf4cae7f:9 ' 00:13:17.356 08:55:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias '8cb650c3-44f5-4686-a4ce-d52a82c561b7:0 3dd4a048-2cbb-4e96-b511-55371b5f67d6:1 24b8ff4b-28c9-4cb5-a3e9-a2c4a43e5e5e:2 4afeaf81-900e-4e27-8a7e-3d42e71eb756:3 dfd696fa-878c-4f4c-a53e-d3d5fef66738:4 cecea475-1b4b-44b4-9c3c-13e5293e2ae8:5 3d403a6d-ab7e-4b79-9c81-fcebc380f3c9:6 ca1bff96-5d3e-46c0-a2fb-025a777f080b:7 645affe2-497c-4c6f-b408-2108af15e8b7:8 6aaae133-cb14-4104-9868-0e78bf4cae7f:9 ' 1:5 256 -d 00:13:17.615 08:55:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 
00:13:17.615 08:55:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=6 00:13:17.615 08:55:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 6 ANY 10.0.0.2/32 00:13:17.873 08:55:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 4 -eq 1 ']' 00:13:17.873 08:55:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:18.438 08:55:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc4 00:13:18.438 08:55:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc4 lvs_4 -c 1048576 00:13:18.695 08:55:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=c0a2d76c-a1dc-4ba0-8630-aef937c1dbba 00:13:18.695 08:55:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:18.695 08:55:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:18.695 08:55:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:18.695 08:55:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0a2d76c-a1dc-4ba0-8630-aef937c1dbba lbd_1 10 00:13:18.952 08:55:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ba77d8f5-0d9e-4618-9dde-0ef8745722fb 00:13:18.952 08:55:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ba77d8f5-0d9e-4618-9dde-0ef8745722fb:0 ' 00:13:18.952 08:55:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:18.952 08:55:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0a2d76c-a1dc-4ba0-8630-aef937c1dbba lbd_2 10 00:13:18.952 08:55:26 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=935791ae-3fc5-4b14-a504-cace1d877cf4 00:13:18.952 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='935791ae-3fc5-4b14-a504-cace1d877cf4:1 ' 00:13:18.952 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:18.952 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0a2d76c-a1dc-4ba0-8630-aef937c1dbba lbd_3 10 00:13:19.209 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a9497d5d-3397-444e-a418-969b914ee763 00:13:19.209 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a9497d5d-3397-444e-a418-969b914ee763:2 ' 00:13:19.209 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:19.209 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0a2d76c-a1dc-4ba0-8630-aef937c1dbba lbd_4 10 00:13:19.467 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9c923e5e-07e4-4400-9c59-1ea5eaa359cc 00:13:19.467 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9c923e5e-07e4-4400-9c59-1ea5eaa359cc:3 ' 00:13:19.467 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:19.467 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0a2d76c-a1dc-4ba0-8630-aef937c1dbba lbd_5 10 00:13:19.725 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=90b107aa-ce6b-4a56-a2c0-63f0befba38e 00:13:19.725 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='90b107aa-ce6b-4a56-a2c0-63f0befba38e:4 ' 00:13:19.725 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:19.725 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0a2d76c-a1dc-4ba0-8630-aef937c1dbba lbd_6 10 00:13:19.983 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ea6311be-751d-421a-aa08-97ee1440035a 00:13:19.983 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ea6311be-751d-421a-aa08-97ee1440035a:5 ' 00:13:19.983 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:19.983 08:55:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0a2d76c-a1dc-4ba0-8630-aef937c1dbba lbd_7 10 00:13:20.240 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a5dedf02-1da4-41fe-9f4d-af8e0b995351 00:13:20.240 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a5dedf02-1da4-41fe-9f4d-af8e0b995351:6 ' 00:13:20.240 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:20.240 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0a2d76c-a1dc-4ba0-8630-aef937c1dbba lbd_8 10 00:13:20.498 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9a441869-b154-48ce-956f-8ac76d9a5fbc 00:13:20.498 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9a441869-b154-48ce-956f-8ac76d9a5fbc:7 ' 00:13:20.498 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:20.498 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0a2d76c-a1dc-4ba0-8630-aef937c1dbba lbd_9 10 00:13:20.498 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=a97332ff-34c2-42e4-8306-72e260b64d01 00:13:20.498 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a97332ff-34c2-42e4-8306-72e260b64d01:8 ' 00:13:20.498 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:20.498 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0a2d76c-a1dc-4ba0-8630-aef937c1dbba lbd_10 10 00:13:20.758 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=489f1032-f578-492a-995d-8db1d4148618 00:13:20.758 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='489f1032-f578-492a-995d-8db1d4148618:9 ' 00:13:20.758 08:55:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias 'ba77d8f5-0d9e-4618-9dde-0ef8745722fb:0 935791ae-3fc5-4b14-a504-cace1d877cf4:1 a9497d5d-3397-444e-a418-969b914ee763:2 9c923e5e-07e4-4400-9c59-1ea5eaa359cc:3 90b107aa-ce6b-4a56-a2c0-63f0befba38e:4 ea6311be-751d-421a-aa08-97ee1440035a:5 a5dedf02-1da4-41fe-9f4d-af8e0b995351:6 9a441869-b154-48ce-956f-8ac76d9a5fbc:7 a97332ff-34c2-42e4-8306-72e260b64d01:8 489f1032-f578-492a-995d-8db1d4148618:9 ' 1:6 256 -d 00:13:21.017 08:55:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:21.017 08:55:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=7 00:13:21.017 08:55:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 7 ANY 10.0.0.2/32 00:13:21.276 08:55:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 5 -eq 1 ']' 00:13:21.276 08:55:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:21.841 
08:55:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc5 00:13:21.841 08:55:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc5 lvs_5 -c 1048576 00:13:21.841 08:55:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=dd0d85b3-b3fd-4c80-a7e0-5c2260197162 00:13:21.841 08:55:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:21.841 08:55:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:21.841 08:55:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:21.841 08:55:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dd0d85b3-b3fd-4c80-a7e0-5c2260197162 lbd_1 10 00:13:22.099 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e18b854e-4338-44e5-8420-0403119e74e5 00:13:22.099 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e18b854e-4338-44e5-8420-0403119e74e5:0 ' 00:13:22.099 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:22.099 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dd0d85b3-b3fd-4c80-a7e0-5c2260197162 lbd_2 10 00:13:22.356 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9e0fa489-150b-4335-82cb-56fa17d39a16 00:13:22.356 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9e0fa489-150b-4335-82cb-56fa17d39a16:1 ' 00:13:22.356 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:22.356 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dd0d85b3-b3fd-4c80-a7e0-5c2260197162 lbd_3 10 
00:13:22.614 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0a64da40-ef5b-4744-90fd-121cd9c8707f 00:13:22.614 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0a64da40-ef5b-4744-90fd-121cd9c8707f:2 ' 00:13:22.614 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:22.614 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dd0d85b3-b3fd-4c80-a7e0-5c2260197162 lbd_4 10 00:13:22.871 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4f55c8b5-db0e-410b-a280-75ae74bcb3cd 00:13:22.871 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4f55c8b5-db0e-410b-a280-75ae74bcb3cd:3 ' 00:13:22.871 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:22.871 08:55:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dd0d85b3-b3fd-4c80-a7e0-5c2260197162 lbd_5 10 00:13:23.129 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ebe6e28b-73c8-40e0-9b11-ce9593338f71 00:13:23.129 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ebe6e28b-73c8-40e0-9b11-ce9593338f71:4 ' 00:13:23.129 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:23.129 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dd0d85b3-b3fd-4c80-a7e0-5c2260197162 lbd_6 10 00:13:23.387 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e3867bda-320a-48cb-a4a8-0a3af4ef6dfe 00:13:23.387 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e3867bda-320a-48cb-a4a8-0a3af4ef6dfe:5 ' 00:13:23.387 08:55:30 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:23.387 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dd0d85b3-b3fd-4c80-a7e0-5c2260197162 lbd_7 10 00:13:23.646 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d8035e56-74ce-4b98-9ee3-71b44b1e2ca9 00:13:23.646 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d8035e56-74ce-4b98-9ee3-71b44b1e2ca9:6 ' 00:13:23.646 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:23.646 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dd0d85b3-b3fd-4c80-a7e0-5c2260197162 lbd_8 10 00:13:23.904 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3fcbfe67-035a-4ca7-a3b8-416690d25a98 00:13:23.904 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3fcbfe67-035a-4ca7-a3b8-416690d25a98:7 ' 00:13:23.904 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:23.904 08:55:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dd0d85b3-b3fd-4c80-a7e0-5c2260197162 lbd_9 10 00:13:24.162 08:55:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b911c907-1860-47c2-acc8-dd1330cacab3 00:13:24.162 08:55:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b911c907-1860-47c2-acc8-dd1330cacab3:8 ' 00:13:24.162 08:55:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:24.162 08:55:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dd0d85b3-b3fd-4c80-a7e0-5c2260197162 lbd_10 10 00:13:24.419 08:55:31 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=26c75a54-ecbf-455a-9ce3-6a7f4384fb93 00:13:24.419 08:55:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='26c75a54-ecbf-455a-9ce3-6a7f4384fb93:9 ' 00:13:24.419 08:55:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias 'e18b854e-4338-44e5-8420-0403119e74e5:0 9e0fa489-150b-4335-82cb-56fa17d39a16:1 0a64da40-ef5b-4744-90fd-121cd9c8707f:2 4f55c8b5-db0e-410b-a280-75ae74bcb3cd:3 ebe6e28b-73c8-40e0-9b11-ce9593338f71:4 e3867bda-320a-48cb-a4a8-0a3af4ef6dfe:5 d8035e56-74ce-4b98-9ee3-71b44b1e2ca9:6 3fcbfe67-035a-4ca7-a3b8-416690d25a98:7 b911c907-1860-47c2-acc8-dd1330cacab3:8 26c75a54-ecbf-455a-9ce3-6a7f4384fb93:9 ' 1:7 256 -d 00:13:24.419 08:55:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:24.419 08:55:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=8 00:13:24.419 08:55:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 8 ANY 10.0.0.2/32 00:13:24.677 08:55:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 6 -eq 1 ']' 00:13:24.677 08:55:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:25.243 08:55:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc6 00:13:25.243 08:55:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc6 lvs_6 -c 1048576 00:13:25.501 08:55:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=9f7e565a-8609-46fe-9842-2ee681b01aed 00:13:25.501 08:55:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:25.501 08:55:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:25.501 08:55:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:25.501 08:55:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9f7e565a-8609-46fe-9842-2ee681b01aed lbd_1 10 00:13:26.069 08:55:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=13197b1b-43fa-43ba-8396-7cdbbdf1b345 00:13:26.069 08:55:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='13197b1b-43fa-43ba-8396-7cdbbdf1b345:0 ' 00:13:26.069 08:55:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:26.069 08:55:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9f7e565a-8609-46fe-9842-2ee681b01aed lbd_2 10 00:13:26.069 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=bdbf1acb-68e8-471b-91c2-b8a991b719a8 00:13:26.069 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='bdbf1acb-68e8-471b-91c2-b8a991b719a8:1 ' 00:13:26.070 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:26.070 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9f7e565a-8609-46fe-9842-2ee681b01aed lbd_3 10 00:13:26.327 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=941afc9c-868f-447b-a0cc-5359b967da75 00:13:26.327 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='941afc9c-868f-447b-a0cc-5359b967da75:2 ' 00:13:26.327 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:26.328 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
9f7e565a-8609-46fe-9842-2ee681b01aed lbd_4 10 00:13:26.584 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f281d35e-13ac-4ccd-a722-da8d11d0f7cc 00:13:26.584 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f281d35e-13ac-4ccd-a722-da8d11d0f7cc:3 ' 00:13:26.584 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:26.584 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9f7e565a-8609-46fe-9842-2ee681b01aed lbd_5 10 00:13:26.841 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=950461b2-4d81-4fde-bcb0-a56889ced6a7 00:13:26.841 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='950461b2-4d81-4fde-bcb0-a56889ced6a7:4 ' 00:13:26.841 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:26.841 08:55:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9f7e565a-8609-46fe-9842-2ee681b01aed lbd_6 10 00:13:27.100 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=63d29830-cb04-4172-a213-5fa2bf2788d7 00:13:27.100 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='63d29830-cb04-4172-a213-5fa2bf2788d7:5 ' 00:13:27.100 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:27.100 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9f7e565a-8609-46fe-9842-2ee681b01aed lbd_7 10 00:13:27.359 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3fc61e7f-b4be-4f7d-b15d-b529a2672ec8 00:13:27.359 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3fc61e7f-b4be-4f7d-b15d-b529a2672ec8:6 ' 
00:13:27.359 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:27.359 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9f7e565a-8609-46fe-9842-2ee681b01aed lbd_8 10 00:13:27.617 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=73abd572-6256-49ff-9d47-c93888882824 00:13:27.617 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='73abd572-6256-49ff-9d47-c93888882824:7 ' 00:13:27.617 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:27.617 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9f7e565a-8609-46fe-9842-2ee681b01aed lbd_9 10 00:13:27.879 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7f29cb57-08c2-455d-a501-809ca7f1d746 00:13:27.879 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7f29cb57-08c2-455d-a501-809ca7f1d746:8 ' 00:13:27.879 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:27.879 08:55:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9f7e565a-8609-46fe-9842-2ee681b01aed lbd_10 10 00:13:28.143 08:55:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d01c37a6-a5b3-4668-9f80-d7a46de40216 00:13:28.143 08:55:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d01c37a6-a5b3-4668-9f80-d7a46de40216:9 ' 00:13:28.143 08:55:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias '13197b1b-43fa-43ba-8396-7cdbbdf1b345:0 bdbf1acb-68e8-471b-91c2-b8a991b719a8:1 941afc9c-868f-447b-a0cc-5359b967da75:2 
f281d35e-13ac-4ccd-a722-da8d11d0f7cc:3 950461b2-4d81-4fde-bcb0-a56889ced6a7:4 63d29830-cb04-4172-a213-5fa2bf2788d7:5 3fc61e7f-b4be-4f7d-b15d-b529a2672ec8:6 73abd572-6256-49ff-9d47-c93888882824:7 7f29cb57-08c2-455d-a501-809ca7f1d746:8 d01c37a6-a5b3-4668-9f80-d7a46de40216:9 ' 1:8 256 -d 00:13:28.143 08:55:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:28.143 08:55:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=9 00:13:28.143 08:55:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 9 ANY 10.0.0.2/32 00:13:28.401 08:55:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 7 -eq 1 ']' 00:13:28.401 08:55:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:28.968 08:55:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc7 00:13:28.968 08:55:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc7 lvs_7 -c 1048576 00:13:29.226 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=ef0b8c08-4c51-4168-aeab-d4d03990cd98 00:13:29.226 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:29.226 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:29.226 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:29.226 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef0b8c08-4c51-4168-aeab-d4d03990cd98 lbd_1 10 00:13:29.484 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=283285ba-9d39-43e5-8ac6-fdb2dba90ed6 00:13:29.484 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='283285ba-9d39-43e5-8ac6-fdb2dba90ed6:0 ' 00:13:29.484 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:29.484 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef0b8c08-4c51-4168-aeab-d4d03990cd98 lbd_2 10 00:13:29.743 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f542d685-f45b-4394-8ea0-fe5a3045d231 00:13:29.743 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f542d685-f45b-4394-8ea0-fe5a3045d231:1 ' 00:13:29.743 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:29.743 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef0b8c08-4c51-4168-aeab-d4d03990cd98 lbd_3 10 00:13:30.003 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=766b82ea-bfe3-41b4-b0a0-624f068453f9 00:13:30.003 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='766b82ea-bfe3-41b4-b0a0-624f068453f9:2 ' 00:13:30.003 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:30.003 08:55:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef0b8c08-4c51-4168-aeab-d4d03990cd98 lbd_4 10 00:13:30.261 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=55374695-e023-4898-9d21-06a332f192ac 00:13:30.261 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='55374695-e023-4898-9d21-06a332f192ac:3 ' 00:13:30.261 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:30.261 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef0b8c08-4c51-4168-aeab-d4d03990cd98 lbd_5 10 00:13:30.520 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=634e850e-161b-4aa4-af18-e756d7df0f19 00:13:30.520 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='634e850e-161b-4aa4-af18-e756d7df0f19:4 ' 00:13:30.520 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:30.520 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef0b8c08-4c51-4168-aeab-d4d03990cd98 lbd_6 10 00:13:30.779 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=befe9de6-4072-4490-9558-0cc4c4301bde 00:13:30.779 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='befe9de6-4072-4490-9558-0cc4c4301bde:5 ' 00:13:30.779 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:30.779 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef0b8c08-4c51-4168-aeab-d4d03990cd98 lbd_7 10 00:13:31.037 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9d17336f-e619-4e87-8081-f7fe5bdc66af 00:13:31.037 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9d17336f-e619-4e87-8081-f7fe5bdc66af:6 ' 00:13:31.037 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:31.037 08:55:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef0b8c08-4c51-4168-aeab-d4d03990cd98 lbd_8 10 00:13:31.037 08:55:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2b4a6340-907f-45d4-aaf1-b4121de0a3c9 00:13:31.037 08:55:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='2b4a6340-907f-45d4-aaf1-b4121de0a3c9:7 ' 00:13:31.037 08:55:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:31.037 08:55:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef0b8c08-4c51-4168-aeab-d4d03990cd98 lbd_9 10 00:13:31.295 08:55:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=186d6d42-ff70-49c0-832c-93bbe3028957 00:13:31.295 08:55:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='186d6d42-ff70-49c0-832c-93bbe3028957:8 ' 00:13:31.295 08:55:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:31.295 08:55:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef0b8c08-4c51-4168-aeab-d4d03990cd98 lbd_10 10 00:13:31.554 08:55:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f970bf33-aaaa-4f82-af66-d7c77121d1d7 00:13:31.554 08:55:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f970bf33-aaaa-4f82-af66-d7c77121d1d7:9 ' 00:13:31.555 08:55:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias '283285ba-9d39-43e5-8ac6-fdb2dba90ed6:0 f542d685-f45b-4394-8ea0-fe5a3045d231:1 766b82ea-bfe3-41b4-b0a0-624f068453f9:2 55374695-e023-4898-9d21-06a332f192ac:3 634e850e-161b-4aa4-af18-e756d7df0f19:4 befe9de6-4072-4490-9558-0cc4c4301bde:5 9d17336f-e619-4e87-8081-f7fe5bdc66af:6 2b4a6340-907f-45d4-aaf1-b4121de0a3c9:7 186d6d42-ff70-49c0-832c-93bbe3028957:8 f970bf33-aaaa-4f82-af66-d7c77121d1d7:9 ' 1:9 256 -d 00:13:31.813 08:55:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:31.813 08:55:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=10 
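The repeated RPC sequence in this log (create initiator group, malloc bdev, lvstore, ten lvols, then a target node) comes from a per-lvstore loop in lvol/iscsi_lvol.sh. The sketch below reconstructs the shape of that loop from the logged commands; it is a minimal illustration, not the script itself: `rpc_py` here is a stub standing in for `/home/vagrant/spdk_repo/spdk/scripts/rpc.py`, and the uuids it returns are fake, so the control flow runs without a live iscsi_tgt.

```shell
# Shape of the per-lvstore loop driving the repeated RPCs in this log
# (lvol/iscsi_lvol.sh@59-64). Stubbed so it runs standalone.
NUM_LVOL=3            # the logged run creates 10 lvols per lvstore

rpc_py() {
    # Stub: the real script calls scripts/rpc.py, which prints the
    # new lvol bdev's uuid on stdout.
    echo "uuid-for-$4"
}

ls_guid="lvs-uuid"    # real run: uuid returned by bdev_lvol_create_lvstore
LUNs=
for j in $(seq 1 $NUM_LVOL); do
    # One 10 MiB lvol bdev per iteration, named lbd_<j>, on lvstore $ls_guid.
    lb_name=$(rpc_py bdev_lvol_create -u "$ls_guid" "lbd_$j" 10)
    # Map it to LUN j-1 of the target node built after the loop,
    # accumulating the "<uuid>:<lun> " pairs seen in the log.
    LUNs+="$lb_name:$((j - 1)) "
done
echo "$LUNs"
# The real script then binds all ten LUNs to one target in a single call:
#   rpc.py iscsi_create_target_node Target<i> Target<i>_alias "$LUNs" 1:<tag> 256 -d
```

This explains why each `iscsi_create_target_node` line in the log carries a space-separated list of ten `uuid:index` pairs: the list is built incrementally, one lvol per loop iteration, before the target node is created.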
00:13:31.813 08:55:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 10 ANY 10.0.0.2/32 00:13:32.072 08:55:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 8 -eq 1 ']' 00:13:32.072 08:55:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:32.639 08:55:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc8 00:13:32.639 08:55:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc8 lvs_8 -c 1048576 00:13:32.639 08:55:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=67375493-3bcd-46c1-a774-dfa53ac11c1e 00:13:32.639 08:55:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:32.639 08:55:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:32.639 08:55:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:32.896 08:55:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 67375493-3bcd-46c1-a774-dfa53ac11c1e lbd_1 10 00:13:33.154 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d15fec67-96bf-4fe2-8629-dd162a516fae 00:13:33.154 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d15fec67-96bf-4fe2-8629-dd162a516fae:0 ' 00:13:33.154 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:33.154 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 67375493-3bcd-46c1-a774-dfa53ac11c1e lbd_2 10 00:13:33.154 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=9de33722-1294-4693-a71c-7b758e5fb49d 00:13:33.154 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9de33722-1294-4693-a71c-7b758e5fb49d:1 ' 00:13:33.154 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:33.154 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 67375493-3bcd-46c1-a774-dfa53ac11c1e lbd_3 10 00:13:33.412 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a0c9e1ca-f535-45dd-b89d-daf72fc8f10f 00:13:33.412 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a0c9e1ca-f535-45dd-b89d-daf72fc8f10f:2 ' 00:13:33.412 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:33.412 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 67375493-3bcd-46c1-a774-dfa53ac11c1e lbd_4 10 00:13:33.669 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d43b752d-8a64-46e3-9df0-fcc482a8b9cd 00:13:33.669 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d43b752d-8a64-46e3-9df0-fcc482a8b9cd:3 ' 00:13:33.669 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:33.669 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 67375493-3bcd-46c1-a774-dfa53ac11c1e lbd_5 10 00:13:33.927 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b1883175-3342-4eb9-8bff-369527213da9 00:13:33.927 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b1883175-3342-4eb9-8bff-369527213da9:4 ' 00:13:33.927 08:55:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:33.927 08:55:40 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 67375493-3bcd-46c1-a774-dfa53ac11c1e lbd_6 10 00:13:33.927 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=93771f8b-abc2-4c9f-9a40-97d73985413c 00:13:33.927 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='93771f8b-abc2-4c9f-9a40-97d73985413c:5 ' 00:13:33.927 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:33.927 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 67375493-3bcd-46c1-a774-dfa53ac11c1e lbd_7 10 00:13:34.186 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=53fc2425-62ea-49c8-a164-98e7ccaf3c55 00:13:34.186 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='53fc2425-62ea-49c8-a164-98e7ccaf3c55:6 ' 00:13:34.186 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:34.186 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 67375493-3bcd-46c1-a774-dfa53ac11c1e lbd_8 10 00:13:34.445 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=81dd62a6-b712-4501-a15b-0655efb6479b 00:13:34.445 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='81dd62a6-b712-4501-a15b-0655efb6479b:7 ' 00:13:34.445 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:34.445 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 67375493-3bcd-46c1-a774-dfa53ac11c1e lbd_9 10 00:13:34.704 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1b639ce4-df60-4c50-8001-ec19de23592d 
00:13:34.704 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1b639ce4-df60-4c50-8001-ec19de23592d:8 ' 00:13:34.704 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:34.704 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 67375493-3bcd-46c1-a774-dfa53ac11c1e lbd_10 10 00:13:34.962 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=92f121f5-64ed-4a17-a5fa-57dbcea0bfc9 00:13:34.962 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='92f121f5-64ed-4a17-a5fa-57dbcea0bfc9:9 ' 00:13:34.962 08:55:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias 'd15fec67-96bf-4fe2-8629-dd162a516fae:0 9de33722-1294-4693-a71c-7b758e5fb49d:1 a0c9e1ca-f535-45dd-b89d-daf72fc8f10f:2 d43b752d-8a64-46e3-9df0-fcc482a8b9cd:3 b1883175-3342-4eb9-8bff-369527213da9:4 93771f8b-abc2-4c9f-9a40-97d73985413c:5 53fc2425-62ea-49c8-a164-98e7ccaf3c55:6 81dd62a6-b712-4501-a15b-0655efb6479b:7 1b639ce4-df60-4c50-8001-ec19de23592d:8 92f121f5-64ed-4a17-a5fa-57dbcea0bfc9:9 ' 1:10 256 -d 00:13:35.220 08:55:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:35.220 08:55:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=11 00:13:35.220 08:55:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 11 ANY 10.0.0.2/32 00:13:35.479 08:55:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 9 -eq 1 ']' 00:13:35.479 08:55:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:35.737 08:55:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # 
bdev=Malloc9 00:13:35.737 08:55:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc9 lvs_9 -c 1048576 00:13:35.996 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=f3de9cad-7d75-4cc2-8ffd-26c84f8dc177 00:13:35.996 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:35.996 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:35.996 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:35.996 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3de9cad-7d75-4cc2-8ffd-26c84f8dc177 lbd_1 10 00:13:36.255 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=355fa940-4093-491c-b16f-8b2cc8087772 00:13:36.255 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='355fa940-4093-491c-b16f-8b2cc8087772:0 ' 00:13:36.255 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:36.255 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3de9cad-7d75-4cc2-8ffd-26c84f8dc177 lbd_2 10 00:13:36.514 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b9c2fc94-5fdf-4a47-a053-8158849e8c98 00:13:36.514 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b9c2fc94-5fdf-4a47-a053-8158849e8c98:1 ' 00:13:36.514 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:36.514 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3de9cad-7d75-4cc2-8ffd-26c84f8dc177 lbd_3 10 00:13:36.773 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=4c316b4e-11ed-4e06-a13e-41e729adb37b 00:13:36.773 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4c316b4e-11ed-4e06-a13e-41e729adb37b:2 ' 00:13:36.773 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:36.773 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3de9cad-7d75-4cc2-8ffd-26c84f8dc177 lbd_4 10 00:13:37.032 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f3a97fd4-dd7b-4666-8843-e677e45d5355 00:13:37.032 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f3a97fd4-dd7b-4666-8843-e677e45d5355:3 ' 00:13:37.032 08:55:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:37.032 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3de9cad-7d75-4cc2-8ffd-26c84f8dc177 lbd_5 10 00:13:37.291 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9de2e34e-6c32-4779-9153-44ed1de76cec 00:13:37.291 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9de2e34e-6c32-4779-9153-44ed1de76cec:4 ' 00:13:37.291 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:37.291 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3de9cad-7d75-4cc2-8ffd-26c84f8dc177 lbd_6 10 00:13:37.551 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2cadcbc2-385d-4e91-8ae4-3b2eabc4f81b 00:13:37.551 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2cadcbc2-385d-4e91-8ae4-3b2eabc4f81b:5 ' 00:13:37.551 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 
$NUM_LVOL) 00:13:37.551 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3de9cad-7d75-4cc2-8ffd-26c84f8dc177 lbd_7 10 00:13:37.810 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=952dc07c-7a9e-4d77-a869-544450420caa 00:13:37.810 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='952dc07c-7a9e-4d77-a869-544450420caa:6 ' 00:13:37.810 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:37.810 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3de9cad-7d75-4cc2-8ffd-26c84f8dc177 lbd_8 10 00:13:38.070 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c81ba872-df5a-41a7-8a5a-9e1b38c47b46 00:13:38.070 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c81ba872-df5a-41a7-8a5a-9e1b38c47b46:7 ' 00:13:38.070 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:38.070 08:55:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3de9cad-7d75-4cc2-8ffd-26c84f8dc177 lbd_9 10 00:13:38.328 08:55:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b693fc01-e660-4bcd-afb1-bf9bd6e522e5 00:13:38.328 08:55:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b693fc01-e660-4bcd-afb1-bf9bd6e522e5:8 ' 00:13:38.328 08:55:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:38.328 08:55:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3de9cad-7d75-4cc2-8ffd-26c84f8dc177 lbd_10 10 00:13:38.328 08:55:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=94ff0318-7f81-43f3-aab4-a9d475bc34db 00:13:38.328 08:55:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='94ff0318-7f81-43f3-aab4-a9d475bc34db:9 ' 00:13:38.328 08:55:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias '355fa940-4093-491c-b16f-8b2cc8087772:0 b9c2fc94-5fdf-4a47-a053-8158849e8c98:1 4c316b4e-11ed-4e06-a13e-41e729adb37b:2 f3a97fd4-dd7b-4666-8843-e677e45d5355:3 9de2e34e-6c32-4779-9153-44ed1de76cec:4 2cadcbc2-385d-4e91-8ae4-3b2eabc4f81b:5 952dc07c-7a9e-4d77-a869-544450420caa:6 c81ba872-df5a-41a7-8a5a-9e1b38c47b46:7 b693fc01-e660-4bcd-afb1-bf9bd6e522e5:8 94ff0318-7f81-43f3-aab4-a9d475bc34db:9 ' 1:11 256 -d 00:13:38.587 08:55:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:38.587 08:55:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=12 00:13:38.587 08:55:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 12 ANY 10.0.0.2/32 00:13:38.846 08:55:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 10 -eq 1 ']' 00:13:38.846 08:55:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:39.413 08:55:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc10 00:13:39.413 08:55:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc10 lvs_10 -c 1048576 00:13:39.672 08:55:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=405b01a2-4dc0-4579-bf94-77ba7b41c8d7 00:13:39.672 08:55:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:39.672 08:55:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:39.672 
08:55:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:39.672 08:55:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 405b01a2-4dc0-4579-bf94-77ba7b41c8d7 lbd_1 10 00:13:39.931 08:55:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=89ba734b-f467-4ef9-baef-ed3b407ad146 00:13:39.931 08:55:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='89ba734b-f467-4ef9-baef-ed3b407ad146:0 ' 00:13:39.931 08:55:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:39.931 08:55:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 405b01a2-4dc0-4579-bf94-77ba7b41c8d7 lbd_2 10 00:13:40.190 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=582c7f1e-7070-4a59-8f98-1f23100d8fe6 00:13:40.190 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='582c7f1e-7070-4a59-8f98-1f23100d8fe6:1 ' 00:13:40.190 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:40.190 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 405b01a2-4dc0-4579-bf94-77ba7b41c8d7 lbd_3 10 00:13:40.448 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=315f875f-aa93-4da2-a248-f78b6e435b65 00:13:40.448 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='315f875f-aa93-4da2-a248-f78b6e435b65:2 ' 00:13:40.448 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:40.448 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 405b01a2-4dc0-4579-bf94-77ba7b41c8d7 lbd_4 10 00:13:40.708 
08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=13c6368d-a868-4c4c-976d-8dafa69790af 00:13:40.708 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='13c6368d-a868-4c4c-976d-8dafa69790af:3 ' 00:13:40.708 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:40.708 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 405b01a2-4dc0-4579-bf94-77ba7b41c8d7 lbd_5 10 00:13:40.966 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=13c291db-680f-4713-ad39-96c841f6d9b9 00:13:40.966 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='13c291db-680f-4713-ad39-96c841f6d9b9:4 ' 00:13:40.966 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:40.967 08:55:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 405b01a2-4dc0-4579-bf94-77ba7b41c8d7 lbd_6 10 00:13:40.967 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=52689e67-4f5f-41a8-8550-f4b05edb6a3b 00:13:40.967 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='52689e67-4f5f-41a8-8550-f4b05edb6a3b:5 ' 00:13:40.967 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:40.967 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 405b01a2-4dc0-4579-bf94-77ba7b41c8d7 lbd_7 10 00:13:41.227 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b1795985-da72-4b45-8196-d2b612f42bed 00:13:41.227 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b1795985-da72-4b45-8196-d2b612f42bed:6 ' 00:13:41.227 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:41.227 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 405b01a2-4dc0-4579-bf94-77ba7b41c8d7 lbd_8 10 00:13:41.487 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1b94945f-bb69-4674-87d6-a75f380444ba 00:13:41.487 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1b94945f-bb69-4674-87d6-a75f380444ba:7 ' 00:13:41.487 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:41.487 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 405b01a2-4dc0-4579-bf94-77ba7b41c8d7 lbd_9 10 00:13:41.747 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a5907267-139f-434b-9e87-1a29fcfd1459 00:13:41.747 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a5907267-139f-434b-9e87-1a29fcfd1459:8 ' 00:13:41.747 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:41.747 08:55:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 405b01a2-4dc0-4579-bf94-77ba7b41c8d7 lbd_10 10 00:13:42.007 08:55:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7501a9db-fd3d-406c-9301-c163a77e619d 00:13:42.007 08:55:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7501a9db-fd3d-406c-9301-c163a77e619d:9 ' 00:13:42.007 08:55:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias '89ba734b-f467-4ef9-baef-ed3b407ad146:0 582c7f1e-7070-4a59-8f98-1f23100d8fe6:1 315f875f-aa93-4da2-a248-f78b6e435b65:2 13c6368d-a868-4c4c-976d-8dafa69790af:3 
13c291db-680f-4713-ad39-96c841f6d9b9:4 52689e67-4f5f-41a8-8550-f4b05edb6a3b:5 b1795985-da72-4b45-8196-d2b612f42bed:6 1b94945f-bb69-4674-87d6-a75f380444ba:7 a5907267-139f-434b-9e87-1a29fcfd1459:8 7501a9db-fd3d-406c-9301-c163a77e619d:9 ' 1:12 256 -d 00:13:42.266 08:55:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@66 -- # timing_exit setup 00:13:42.266 08:55:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:42.266 08:55:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:42.266 08:55:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@68 -- # sleep 1 00:13:43.642 08:55:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@70 -- # timing_enter discovery 00:13:43.642 08:55:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:43.642 08:55:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:43.642 08:55:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@71 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:13:43.642 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:13:43.642 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:13:43.642 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:13:43.642 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:13:43.642 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:13:43.642 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:13:43.642 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:13:43.642 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:13:43.642 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:13:43.642 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:13:43.642 08:55:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@72 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:13:43.642 [2024-07-25 08:55:50.428489] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.441631] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.461543] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.492571] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.494942] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.518948] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.529280] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.533616] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.545691] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.565189] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.584650] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.584782] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.586948] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.588268] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.621857] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.666536] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.677153] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.690655] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 
[2024-07-25 08:55:50.708506] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.722841] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.752440] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.759520] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.759559] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.642 [2024-07-25 08:55:50.760022] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.902 [2024-07-25 08:55:50.764562] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.902 [2024-07-25 08:55:50.827517] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.902 [2024-07-25 08:55:50.836858] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.902 [2024-07-25 08:55:50.847577] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.902 [2024-07-25 08:55:50.885595] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.902 [2024-07-25 08:55:50.945322] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.902 [2024-07-25 08:55:51.004707] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.029345] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.084978] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.090786] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.112867] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.118816] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.124267] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.135542] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.169826] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.176217] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.224452] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.232213] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.246427] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.251903] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.258709] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.259046] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.160 [2024-07-25 08:55:51.267303] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.419 [2024-07-25 08:55:51.282326] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.419 [2024-07-25 08:55:51.302875] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.419 [2024-07-25 08:55:51.332007] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.419 [2024-07-25 08:55:51.413803] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:13:44.419 [2024-07-25 08:55:51.420695] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.419 [2024-07-25 08:55:51.444029] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.419 [2024-07-25 08:55:51.481726] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.419 [2024-07-25 08:55:51.488350] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.419 [2024-07-25 08:55:51.495969] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.419 [2024-07-25 08:55:51.511511] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.419 [2024-07-25 08:55:51.512012] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.677 [2024-07-25 08:55:51.575109] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.677 [2024-07-25 08:55:51.587780] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.677 [2024-07-25 08:55:51.606586] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.677 [2024-07-25 08:55:51.616218] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.677 [2024-07-25 08:55:51.633952] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.677 [2024-07-25 08:55:51.633952] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.677 [2024-07-25 08:55:51.640374] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.677 [2024-07-25 08:55:51.669974] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.677 [2024-07-25 08:55:51.680177] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.677 [2024-07-25 
08:55:51.715998] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.677 [2024-07-25 08:55:51.734895] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.677 [2024-07-25 08:55:51.734899] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.677 [2024-07-25 08:55:51.756910] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.677 [2024-07-25 08:55:51.756954] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.678 [2024-07-25 08:55:51.762109] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.678 [2024-07-25 08:55:51.773476] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.936 [2024-07-25 08:55:51.828985] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.936 [2024-07-25 08:55:51.833939] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.936 [2024-07-25 08:55:51.918469] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.936 [2024-07-25 08:55:51.921746] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.936 [2024-07-25 08:55:51.923420] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.937 [2024-07-25 08:55:51.942692] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.937 [2024-07-25 08:55:51.945886] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.937 [2024-07-25 08:55:51.961747] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.937 [2024-07-25 08:55:51.962579] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.937 [2024-07-25 08:55:51.983087] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.937 [2024-07-25 08:55:51.983629] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.937 [2024-07-25 08:55:52.003002] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.937 [2024-07-25 08:55:52.009377] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.937 [2024-07-25 08:55:52.012775] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.937 [2024-07-25 08:55:52.016518] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.937 [2024-07-25 08:55:52.022092] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.937 [2024-07-25 08:55:52.036576] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.937 [2024-07-25 08:55:52.046495] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:44.937 [2024-07-25 08:55:52.046551] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:45.196 [2024-07-25 08:55:52.073789] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:45.196 [2024-07-25 08:55:52.074108] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:45.196 [2024-07-25 08:55:52.086301] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:45.196 [2024-07-25 08:55:52.133172] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:45.196 [2024-07-25 08:55:52.169618] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:45.196 [2024-07-25 08:55:52.174748] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:45.196 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 
10.0.0.1,3260] 00:13:45.196 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:13:45.196 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:13:45.196 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:13:45.196 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:13:45.196 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:13:45.196 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:13:45.196 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:13:45.196 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:13:45.196 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:13:45.196 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:13:45.196 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:13:45.196 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:13:45.196 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:13:45.196 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:13:45.196 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:13:45.196 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:13:45.196 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:13:45.196 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 
00:13:45.196 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@73 -- # waitforiscsidevices 100 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@116 -- # local num=100 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:13:45.196 [2024-07-25 08:55:52.241435] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # n=100 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@120 -- # '[' 100 -ne 100 ']' 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@123 -- # return 0 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@74 -- # timing_exit discovery 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@76 -- # timing_enter fio 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:45.196 08:55:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 8 -t randwrite -r 10 -v 00:13:45.456 [global] 00:13:45.456 thread=1 
00:13:45.456 invalidate=1 00:13:45.456 rw=randwrite 00:13:45.456 time_based=1 00:13:45.456 runtime=10 00:13:45.456 ioengine=libaio 00:13:45.456 direct=1 00:13:45.456 bs=131072 00:13:45.456 iodepth=8 00:13:45.456 norandommap=0 00:13:45.456 numjobs=1 00:13:45.456 00:13:45.456 verify_dump=1 00:13:45.456 verify_backlog=512 00:13:45.456 verify_state_save=0 00:13:45.456 do_verify=1 00:13:45.456 verify=crc32c-intel 00:13:45.456 [job0] 00:13:45.456 filename=/dev/sdc 00:13:45.456 [job1] 00:13:45.456 filename=/dev/sdf 00:13:45.456 [job2] 00:13:45.456 filename=/dev/sdh 00:13:45.456 [job3] 00:13:45.456 filename=/dev/sdj 00:13:45.456 [job4] 00:13:45.456 filename=/dev/sdk 00:13:45.456 [job5] 00:13:45.456 filename=/dev/sdo 00:13:45.456 [job6] 00:13:45.456 filename=/dev/sdr 00:13:45.456 [job7] 00:13:45.456 filename=/dev/sdu 00:13:45.456 [job8] 00:13:45.456 filename=/dev/sdw 00:13:45.456 [job9] 00:13:45.456 filename=/dev/sdaa 00:13:45.456 [job10] 00:13:45.456 filename=/dev/sdg 00:13:45.456 [job11] 00:13:45.456 filename=/dev/sdl 00:13:45.456 [job12] 00:13:45.456 filename=/dev/sdn 00:13:45.456 [job13] 00:13:45.456 filename=/dev/sdq 00:13:45.456 [job14] 00:13:45.456 filename=/dev/sdt 00:13:45.456 [job15] 00:13:45.456 filename=/dev/sdv 00:13:45.456 [job16] 00:13:45.456 filename=/dev/sdy 00:13:45.456 [job17] 00:13:45.456 filename=/dev/sdz 00:13:45.456 [job18] 00:13:45.456 filename=/dev/sdac 00:13:45.456 [job19] 00:13:45.456 filename=/dev/sdae 00:13:45.456 [job20] 00:13:45.456 filename=/dev/sdad 00:13:45.456 [job21] 00:13:45.456 filename=/dev/sdaf 00:13:45.456 [job22] 00:13:45.456 filename=/dev/sdah 00:13:45.456 [job23] 00:13:45.456 filename=/dev/sdaj 00:13:45.456 [job24] 00:13:45.456 filename=/dev/sdal 00:13:45.456 [job25] 00:13:45.456 filename=/dev/sdan 00:13:45.456 [job26] 00:13:45.456 filename=/dev/sdap 00:13:45.456 [job27] 00:13:45.456 filename=/dev/sdar 00:13:45.456 [job28] 00:13:45.456 filename=/dev/sdat 00:13:45.456 [job29] 00:13:45.456 filename=/dev/sdaw 00:13:45.456 [job30] 
00:13:45.456 filename=/dev/sdag 00:13:45.456 [job31] 00:13:45.456 filename=/dev/sdai 00:13:45.456 [job32] 00:13:45.456 filename=/dev/sdak 00:13:45.456 [job33] 00:13:45.456 filename=/dev/sdam 00:13:45.456 [job34] 00:13:45.456 filename=/dev/sdao 00:13:45.456 [job35] 00:13:45.456 filename=/dev/sdaq 00:13:45.456 [job36] 00:13:45.456 filename=/dev/sdas 00:13:45.456 [job37] 00:13:45.456 filename=/dev/sdau 00:13:45.456 [job38] 00:13:45.456 filename=/dev/sdav 00:13:45.456 [job39] 00:13:45.456 filename=/dev/sdax 00:13:45.456 [job40] 00:13:45.456 filename=/dev/sday 00:13:45.456 [job41] 00:13:45.456 filename=/dev/sdaz 00:13:45.456 [job42] 00:13:45.456 filename=/dev/sdba 00:13:45.456 [job43] 00:13:45.456 filename=/dev/sdbb 00:13:45.456 [job44] 00:13:45.456 filename=/dev/sdbd 00:13:45.456 [job45] 00:13:45.456 filename=/dev/sdbe 00:13:45.456 [job46] 00:13:45.456 filename=/dev/sdbh 00:13:45.456 [job47] 00:13:45.456 filename=/dev/sdbj 00:13:45.456 [job48] 00:13:45.456 filename=/dev/sdbl 00:13:45.456 [job49] 00:13:45.456 filename=/dev/sdbr 00:13:45.456 [job50] 00:13:45.456 filename=/dev/sdbc 00:13:45.456 [job51] 00:13:45.456 filename=/dev/sdbf 00:13:45.456 [job52] 00:13:45.456 filename=/dev/sdbg 00:13:45.456 [job53] 00:13:45.456 filename=/dev/sdbi 00:13:45.456 [job54] 00:13:45.456 filename=/dev/sdbk 00:13:45.456 [job55] 00:13:45.456 filename=/dev/sdbm 00:13:45.456 [job56] 00:13:45.456 filename=/dev/sdbn 00:13:45.456 [job57] 00:13:45.456 filename=/dev/sdbo 00:13:45.456 [job58] 00:13:45.456 filename=/dev/sdbp 00:13:45.456 [job59] 00:13:45.456 filename=/dev/sdbu 00:13:45.456 [job60] 00:13:45.456 filename=/dev/sdbq 00:13:45.456 [job61] 00:13:45.456 filename=/dev/sdbs 00:13:45.456 [job62] 00:13:45.456 filename=/dev/sdbt 00:13:45.456 [job63] 00:13:45.456 filename=/dev/sdbv 00:13:45.456 [job64] 00:13:45.456 filename=/dev/sdbw 00:13:45.456 [job65] 00:13:45.456 filename=/dev/sdbx 00:13:45.456 [job66] 00:13:45.456 filename=/dev/sdby 00:13:45.456 [job67] 00:13:45.456 filename=/dev/sdbz 
00:13:45.716 [job68] 00:13:45.716 filename=/dev/sdca 00:13:45.716 [job69] 00:13:45.716 filename=/dev/sdci 00:13:45.716 [job70] 00:13:45.716 filename=/dev/sdcc 00:13:45.716 [job71] 00:13:45.716 filename=/dev/sdcd 00:13:45.716 [job72] 00:13:45.716 filename=/dev/sdcg 00:13:45.716 [job73] 00:13:45.716 filename=/dev/sdck 00:13:45.716 [job74] 00:13:45.716 filename=/dev/sdcn 00:13:45.716 [job75] 00:13:45.716 filename=/dev/sdcp 00:13:45.716 [job76] 00:13:45.716 filename=/dev/sdcq 00:13:45.716 [job77] 00:13:45.716 filename=/dev/sdcs 00:13:45.716 [job78] 00:13:45.716 filename=/dev/sdct 00:13:45.716 [job79] 00:13:45.716 filename=/dev/sdcv 00:13:45.716 [job80] 00:13:45.716 filename=/dev/sdcb 00:13:45.716 [job81] 00:13:45.716 filename=/dev/sdce 00:13:45.716 [job82] 00:13:45.716 filename=/dev/sdcf 00:13:45.716 [job83] 00:13:45.716 filename=/dev/sdch 00:13:45.716 [job84] 00:13:45.716 filename=/dev/sdcj 00:13:45.716 [job85] 00:13:45.716 filename=/dev/sdcl 00:13:45.716 [job86] 00:13:45.716 filename=/dev/sdcm 00:13:45.716 [job87] 00:13:45.716 filename=/dev/sdco 00:13:45.716 [job88] 00:13:45.716 filename=/dev/sdcr 00:13:45.716 [job89] 00:13:45.716 filename=/dev/sdcu 00:13:45.716 [job90] 00:13:45.716 filename=/dev/sda 00:13:45.716 [job91] 00:13:45.716 filename=/dev/sdb 00:13:45.716 [job92] 00:13:45.716 filename=/dev/sdd 00:13:45.716 [job93] 00:13:45.716 filename=/dev/sde 00:13:45.716 [job94] 00:13:45.716 filename=/dev/sdi 00:13:45.716 [job95] 00:13:45.716 filename=/dev/sdm 00:13:45.716 [job96] 00:13:45.716 filename=/dev/sdp 00:13:45.716 [job97] 00:13:45.716 filename=/dev/sds 00:13:45.716 [job98] 00:13:45.716 filename=/dev/sdx 00:13:45.716 [job99] 00:13:45.716 filename=/dev/sdab 00:13:47.622 queue_depth set to 113 (sdc) 00:13:47.622 queue_depth set to 113 (sdf) 00:13:47.622 queue_depth set to 113 (sdh) 00:13:47.622 queue_depth set to 113 (sdj) 00:13:47.622 queue_depth set to 113 (sdk) 00:13:47.622 queue_depth set to 113 (sdo) 00:13:47.622 queue_depth set to 113 (sdr) 00:13:47.622 
queue_depth set to 113 (sdu) 00:13:47.622 queue_depth set to 113 (sdw) 00:13:47.622 queue_depth set to 113 (sdaa) 00:13:47.881 queue_depth set to 113 (sdg) 00:13:47.881 queue_depth set to 113 (sdl) 00:13:47.881 queue_depth set to 113 (sdn) 00:13:47.881 queue_depth set to 113 (sdq) 00:13:47.881 queue_depth set to 113 (sdt) 00:13:47.881 queue_depth set to 113 (sdv) 00:13:47.881 queue_depth set to 113 (sdy) 00:13:47.881 queue_depth set to 113 (sdz) 00:13:47.881 queue_depth set to 113 (sdac) 00:13:47.881 queue_depth set to 113 (sdae) 00:13:47.881 queue_depth set to 113 (sdad) 00:13:47.881 queue_depth set to 113 (sdaf) 00:13:47.881 queue_depth set to 113 (sdah) 00:13:48.141 queue_depth set to 113 (sdaj) 00:13:48.141 queue_depth set to 113 (sdal) 00:13:48.141 queue_depth set to 113 (sdan) 00:13:48.141 queue_depth set to 113 (sdap) 00:13:48.141 queue_depth set to 113 (sdar) 00:13:48.141 queue_depth set to 113 (sdat) 00:13:48.141 queue_depth set to 113 (sdaw) 00:13:48.141 queue_depth set to 113 (sdag) 00:13:48.141 queue_depth set to 113 (sdai) 00:13:48.141 queue_depth set to 113 (sdak) 00:13:48.141 queue_depth set to 113 (sdam) 00:13:48.141 queue_depth set to 113 (sdao) 00:13:48.141 queue_depth set to 113 (sdaq) 00:13:48.400 queue_depth set to 113 (sdas) 00:13:48.400 queue_depth set to 113 (sdau) 00:13:48.400 queue_depth set to 113 (sdav) 00:13:48.400 queue_depth set to 113 (sdax) 00:13:48.400 queue_depth set to 113 (sday) 00:13:48.400 queue_depth set to 113 (sdaz) 00:13:48.400 queue_depth set to 113 (sdba) 00:13:48.400 queue_depth set to 113 (sdbb) 00:13:48.400 queue_depth set to 113 (sdbd) 00:13:48.400 queue_depth set to 113 (sdbe) 00:13:48.400 queue_depth set to 113 (sdbh) 00:13:48.400 queue_depth set to 113 (sdbj) 00:13:48.659 queue_depth set to 113 (sdbl) 00:13:48.659 queue_depth set to 113 (sdbr) 00:13:48.659 queue_depth set to 113 (sdbc) 00:13:48.659 queue_depth set to 113 (sdbf) 00:13:48.659 queue_depth set to 113 (sdbg) 00:13:48.659 queue_depth set to 113 (sdbi) 
00:13:48.659 queue_depth set to 113 (sdbk) 00:13:48.659 queue_depth set to 113 (sdbm) 00:13:48.659 queue_depth set to 113 (sdbn) 00:13:48.659 queue_depth set to 113 (sdbo) 00:13:48.659 queue_depth set to 113 (sdbp) 00:13:48.659 queue_depth set to 113 (sdbu) 00:13:48.659 queue_depth set to 113 (sdbq) 00:13:48.926 queue_depth set to 113 (sdbs) 00:13:48.926 queue_depth set to 113 (sdbt) 00:13:48.926 queue_depth set to 113 (sdbv) 00:13:48.926 queue_depth set to 113 (sdbw) 00:13:48.926 queue_depth set to 113 (sdbx) 00:13:48.926 queue_depth set to 113 (sdby) 00:13:48.926 queue_depth set to 113 (sdbz) 00:13:48.926 queue_depth set to 113 (sdca) 00:13:48.926 queue_depth set to 113 (sdci) 00:13:48.926 queue_depth set to 113 (sdcc) 00:13:48.926 queue_depth set to 113 (sdcd) 00:13:49.191 queue_depth set to 113 (sdcg) 00:13:49.191 queue_depth set to 113 (sdck) 00:13:49.191 queue_depth set to 113 (sdcn) 00:13:49.191 queue_depth set to 113 (sdcp) 00:13:49.191 queue_depth set to 113 (sdcq) 00:13:49.191 queue_depth set to 113 (sdcs) 00:13:49.191 queue_depth set to 113 (sdct) 00:13:49.191 queue_depth set to 113 (sdcv) 00:13:49.191 queue_depth set to 113 (sdcb) 00:13:49.191 queue_depth set to 113 (sdce) 00:13:49.191 queue_depth set to 113 (sdcf) 00:13:49.191 queue_depth set to 113 (sdch) 00:13:49.191 queue_depth set to 113 (sdcj) 00:13:49.191 queue_depth set to 113 (sdcl) 00:13:49.450 queue_depth set to 113 (sdcm) 00:13:49.450 queue_depth set to 113 (sdco) 00:13:49.450 queue_depth set to 113 (sdcr) 00:13:49.450 queue_depth set to 113 (sdcu) 00:13:49.450 queue_depth set to 113 (sda) 00:13:49.450 queue_depth set to 113 (sdb) 00:13:49.450 queue_depth set to 113 (sdd) 00:13:49.450 queue_depth set to 113 (sde) 00:13:49.450 queue_depth set to 113 (sdi) 00:13:49.450 queue_depth set to 113 (sdm) 00:13:49.450 queue_depth set to 113 (sdp) 00:13:49.450 queue_depth set to 113 (sds) 00:13:49.710 queue_depth set to 113 (sdx) 00:13:49.710 queue_depth set to 113 (sdab) 00:13:49.710 job0: (g=0): 
rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job1: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job2: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job3: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job4: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job5: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job6: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job7: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job8: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job9: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job10: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job11: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job12: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job13: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job14: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job15: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, 
ioengine=libaio, iodepth=8 00:13:49.710 job16: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job17: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job18: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job19: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job20: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job21: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job22: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job23: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job24: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job25: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job26: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job27: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job28: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job29: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job30: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job31: (g=0): rw=randwrite, bs=(R) 
128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job32: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job33: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.710 job34: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.969 job35: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.969 job36: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.969 job37: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.969 job38: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.969 job39: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.969 job40: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.969 job41: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.969 job42: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.969 job43: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.969 job44: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.969 job45: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job46: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 
00:13:49.970 job47: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job48: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job49: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job50: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job51: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job52: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job53: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job54: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job55: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job56: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job57: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job58: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job59: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job60: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job61: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job62: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job63: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job64: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job65: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job66: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job67: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job68: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job69: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job70: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job71: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job72: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job73: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job74: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job75: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job76: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job77: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 
job78: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job79: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job80: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job81: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job82: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job83: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job84: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job85: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job86: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job87: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job88: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job89: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job90: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job91: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job92: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job93: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 
128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job94: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job95: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job96: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job97: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job98: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 job99: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:49.970 fio-3.35 00:13:49.970 Starting 100 threads 00:13:49.970 [2024-07-25 08:55:57.023448] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.028327] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.032683] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.037814] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.041233] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.044515] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.047219] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.049965] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.052509] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 
08:55:57.054982] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.057419] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.059507] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.061568] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.063525] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.065361] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.067042] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.068603] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.070496] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.072034] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.073704] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.075318] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.076952] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.078492] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.080024] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.081558] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.083174] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.084697] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.086253] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:49.970 [2024-07-25 08:55:57.087853] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.089402] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.090971] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.092466] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.093960] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.095418] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.099139] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.100679] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.102101] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.103550] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.105131] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.106691] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.108234] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.109870] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.111438] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.113063] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.114666] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.116264] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.117959] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.119956] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.121815] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.123604] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.125127] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.127415] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.129069] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.130667] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.132631] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.134215] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.135762] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.137247] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 
08:55:57.138746] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.140389] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.142026] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.143751] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.145336] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.147038] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.148554] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.150522] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.153150] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.154666] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.156717] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.158229] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.159775] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.161334] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.162798] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.164567] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.166685] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.168074] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.169471] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.171352] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.172855] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.174336] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.175971] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.178095] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.180349] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.181897] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.183522] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.185154] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.186645] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.188214] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.189644] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.191196] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.192743] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:13:50.230 [2024-07-25 08:55:57.194239] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
[repeated identical "scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9" notices from 08:55:57.196652 through 08:56:09.691289 elided]
00:14:02.639 00:14:02.639 job0: (groupid=0, jobs=1): err= 0: pid=69796: Thu Jul 25 08:56:09 2024 00:14:02.639 read: IOPS=78, BW=9.83MiB/s (10.3MB/s)(80.0MiB/8136msec) 00:14:02.639 slat (usec): min=4, max=1469, avg=73.77,
stdev=156.70 00:14:02.639 clat (usec): min=7670, max=64741, avg=17903.62, stdev=8201.73 00:14:02.640 lat (usec): min=7678, max=64760, avg=17977.40, stdev=8208.27 00:14:02.640 clat percentiles (usec): 00:14:02.640 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10683], 20.00th=[11731], 00:14:02.640 | 30.00th=[13173], 40.00th=[14091], 50.00th=[15795], 60.00th=[17433], 00:14:02.640 | 70.00th=[19268], 80.00th=[22414], 90.00th=[27132], 95.00th=[33817], 00:14:02.640 | 99.00th=[52167], 99.50th=[55313], 99.90th=[64750], 99.95th=[64750], 00:14:02.640 | 99.99th=[64750] 00:14:02.640 write: IOPS=84, BW=10.5MiB/s (11.1MB/s)(90.8MiB/8606msec); 0 zone resets 00:14:02.640 slat (usec): min=34, max=5232, avg=223.81, stdev=469.85 00:14:02.640 clat (msec): min=48, max=360, avg=93.87, stdev=44.99 00:14:02.640 lat (msec): min=48, max=360, avg=94.10, stdev=45.01 00:14:02.640 clat percentiles (msec): 00:14:02.640 | 1.00th=[ 54], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 65], 00:14:02.640 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 83], 00:14:02.640 | 70.00th=[ 94], 80.00th=[ 113], 90.00th=[ 155], 95.00th=[ 192], 00:14:02.640 | 99.00th=[ 259], 99.50th=[ 279], 99.90th=[ 359], 99.95th=[ 359], 00:14:02.640 | 99.99th=[ 359] 00:14:02.640 bw ( KiB/s): min= 1788, max=15360, per=0.89%, avg=9672.16, stdev=4505.42, samples=19 00:14:02.640 iops : min= 13, max= 120, avg=75.37, stdev=35.47, samples=19 00:14:02.640 lat (msec) : 10=2.12%, 20=31.99%, 50=12.30%, 100=40.19%, 250=12.45% 00:14:02.640 lat (msec) : 500=0.95% 00:14:02.640 cpu : usr=0.65%, sys=0.31%, ctx=2423, majf=0, minf=3 00:14:02.640 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.640 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.640 issued rwts: total=640,726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.640 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.640 job1: 
(groupid=0, jobs=1): err= 0: pid=69797: Thu Jul 25 08:56:09 2024 00:14:02.640 read: IOPS=77, BW=9904KiB/s (10.1MB/s)(80.0MiB/8271msec) 00:14:02.640 slat (usec): min=5, max=3302, avg=82.96, stdev=193.37 00:14:02.640 clat (usec): min=5936, max=58146, avg=17824.80, stdev=9610.49 00:14:02.640 lat (usec): min=5956, max=58155, avg=17907.75, stdev=9637.70 00:14:02.640 clat percentiles (usec): 00:14:02.640 | 1.00th=[ 6325], 5.00th=[ 7439], 10.00th=[ 9241], 20.00th=[10290], 00:14:02.640 | 30.00th=[10945], 40.00th=[11863], 50.00th=[15008], 60.00th=[18482], 00:14:02.640 | 70.00th=[20579], 80.00th=[23725], 90.00th=[32113], 95.00th=[38536], 00:14:02.640 | 99.00th=[47449], 99.50th=[49546], 99.90th=[57934], 99.95th=[57934], 00:14:02.640 | 99.99th=[57934] 00:14:02.640 write: IOPS=82, BW=10.3MiB/s (10.8MB/s)(89.4MiB/8642msec); 0 zone resets 00:14:02.640 slat (usec): min=40, max=8996, avg=212.14, stdev=471.68 00:14:02.640 clat (msec): min=21, max=357, avg=95.54, stdev=45.97 00:14:02.640 lat (msec): min=21, max=357, avg=95.76, stdev=45.94 00:14:02.640 clat percentiles (msec): 00:14:02.640 | 1.00th=[ 22], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 67], 00:14:02.640 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 89], 00:14:02.640 | 70.00th=[ 99], 80.00th=[ 116], 90.00th=[ 161], 95.00th=[ 201], 00:14:02.640 | 99.00th=[ 275], 99.50th=[ 300], 99.90th=[ 359], 99.95th=[ 359], 00:14:02.640 | 99.99th=[ 359] 00:14:02.640 bw ( KiB/s): min= 3065, max=16063, per=0.87%, avg=9530.00, stdev=4291.43, samples=19 00:14:02.640 iops : min= 23, max= 125, avg=74.16, stdev=33.53, samples=19 00:14:02.640 lat (msec) : 10=7.53%, 20=23.76%, 50=16.90%, 100=36.97%, 250=14.10% 00:14:02.640 lat (msec) : 500=0.74% 00:14:02.640 cpu : usr=0.75%, sys=0.29%, ctx=2389, majf=0, minf=3 00:14:02.640 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.640 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.640 issued rwts: total=640,715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.640 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.640 job2: (groupid=0, jobs=1): err= 0: pid=69800: Thu Jul 25 08:56:09 2024 00:14:02.640 read: IOPS=69, BW=8872KiB/s (9085kB/s)(69.1MiB/7978msec) 00:14:02.640 slat (usec): min=5, max=3204, avg=100.63, stdev=289.26 00:14:02.640 clat (msec): min=4, max=136, avg=22.68, stdev=17.95 00:14:02.640 lat (msec): min=4, max=136, avg=22.78, stdev=17.94 00:14:02.640 clat percentiles (msec): 00:14:02.640 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 12], 00:14:02.640 | 30.00th=[ 15], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 22], 00:14:02.640 | 70.00th=[ 24], 80.00th=[ 27], 90.00th=[ 41], 95.00th=[ 53], 00:14:02.640 | 99.00th=[ 104], 99.50th=[ 128], 99.90th=[ 138], 99.95th=[ 138], 00:14:02.640 | 99.99th=[ 138] 00:14:02.640 write: IOPS=76, BW=9728KiB/s (9962kB/s)(80.0MiB/8421msec); 0 zone resets 00:14:02.640 slat (usec): min=43, max=7225, avg=224.37, stdev=448.00 00:14:02.640 clat (msec): min=19, max=398, avg=104.02, stdev=49.80 00:14:02.640 lat (msec): min=20, max=398, avg=104.25, stdev=49.77 00:14:02.640 clat percentiles (msec): 00:14:02.640 | 1.00th=[ 44], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 71], 00:14:02.640 | 30.00th=[ 80], 40.00th=[ 89], 50.00th=[ 95], 60.00th=[ 101], 00:14:02.640 | 70.00th=[ 110], 80.00th=[ 124], 90.00th=[ 142], 95.00th=[ 182], 00:14:02.640 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 401], 99.95th=[ 401], 00:14:02.640 | 99.99th=[ 401] 00:14:02.640 bw ( KiB/s): min= 1788, max=14336, per=0.75%, avg=8190.20, stdev=4082.33, samples=20 00:14:02.640 iops : min= 13, max= 112, avg=63.90, stdev=31.99, samples=20 00:14:02.640 lat (msec) : 10=7.88%, 20=17.94%, 50=18.78%, 100=32.69%, 250=20.96% 00:14:02.640 lat (msec) : 500=1.76% 00:14:02.640 cpu : usr=0.86%, sys=0.24%, ctx=2035, majf=0, minf=7 00:14:02.640 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:14:02.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.640 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.640 issued rwts: total=553,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.640 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.640 job3: (groupid=0, jobs=1): err= 0: pid=69813: Thu Jul 25 08:56:09 2024 00:14:02.640 read: IOPS=78, BW=9.81MiB/s (10.3MB/s)(80.0MiB/8153msec) 00:14:02.640 slat (usec): min=4, max=1226, avg=64.41, stdev=124.44 00:14:02.640 clat (msec): min=3, max=146, avg=21.55, stdev=15.65 00:14:02.640 lat (msec): min=3, max=146, avg=21.62, stdev=15.65 00:14:02.640 clat percentiles (msec): 00:14:02.640 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 12], 00:14:02.640 | 30.00th=[ 15], 40.00th=[ 17], 50.00th=[ 19], 60.00th=[ 22], 00:14:02.640 | 70.00th=[ 24], 80.00th=[ 27], 90.00th=[ 36], 95.00th=[ 44], 00:14:02.640 | 99.00th=[ 90], 99.50th=[ 102], 99.90th=[ 148], 99.95th=[ 148], 00:14:02.640 | 99.99th=[ 148] 00:14:02.640 write: IOPS=79, BW=9.92MiB/s (10.4MB/s)(82.6MiB/8327msec); 0 zone resets 00:14:02.640 slat (usec): min=34, max=6542, avg=202.89, stdev=426.36 00:14:02.640 clat (msec): min=24, max=431, avg=99.75, stdev=53.25 00:14:02.640 lat (msec): min=24, max=431, avg=99.95, stdev=53.25 00:14:02.640 clat percentiles (msec): 00:14:02.640 | 1.00th=[ 32], 5.00th=[ 59], 10.00th=[ 62], 20.00th=[ 67], 00:14:02.640 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 95], 00:14:02.640 | 70.00th=[ 106], 80.00th=[ 122], 90.00th=[ 144], 95.00th=[ 186], 00:14:02.640 | 99.00th=[ 363], 99.50th=[ 376], 99.90th=[ 430], 99.95th=[ 430], 00:14:02.640 | 99.99th=[ 430] 00:14:02.640 bw ( KiB/s): min= 2560, max=14562, per=0.85%, avg=9288.83, stdev=3903.74, samples=18 00:14:02.640 iops : min= 20, max= 113, avg=72.33, stdev=30.37, samples=18 00:14:02.640 lat (msec) : 4=0.23%, 10=4.77%, 20=22.75%, 50=20.06%, 100=33.97% 00:14:02.640 lat (msec) : 250=16.76%, 500=1.46% 
00:14:02.640 cpu : usr=0.63%, sys=0.33%, ctx=2319, majf=0, minf=3 00:14:02.640 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.640 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.640 issued rwts: total=640,661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.640 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.640 job4: (groupid=0, jobs=1): err= 0: pid=69816: Thu Jul 25 08:56:09 2024 00:14:02.640 read: IOPS=75, BW=9638KiB/s (9869kB/s)(80.0MiB/8500msec) 00:14:02.640 slat (usec): min=5, max=5814, avg=80.67, stdev=309.01 00:14:02.640 clat (usec): min=5049, max=61210, avg=13991.05, stdev=7437.69 00:14:02.640 lat (usec): min=5100, max=61217, avg=14071.72, stdev=7443.61 00:14:02.640 clat percentiles (usec): 00:14:02.641 | 1.00th=[ 6259], 5.00th=[ 6915], 10.00th=[ 7242], 20.00th=[ 8717], 00:14:02.641 | 30.00th=[10028], 40.00th=[11207], 50.00th=[12125], 60.00th=[13173], 00:14:02.641 | 70.00th=[15401], 80.00th=[17695], 90.00th=[21365], 95.00th=[26608], 00:14:02.641 | 99.00th=[49021], 99.50th=[53216], 99.90th=[61080], 99.95th=[61080], 00:14:02.641 | 99.99th=[61080] 00:14:02.641 write: IOPS=83, BW=10.4MiB/s (10.9MB/s)(93.0MiB/8946msec); 0 zone resets 00:14:02.641 slat (usec): min=30, max=7436, avg=183.59, stdev=367.50 00:14:02.641 clat (usec): min=1640, max=259248, avg=95380.80, stdev=37984.87 00:14:02.641 lat (usec): min=1686, max=260471, avg=95564.39, stdev=37987.76 00:14:02.641 clat percentiles (msec): 00:14:02.641 | 1.00th=[ 9], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 67], 00:14:02.641 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 95], 00:14:02.641 | 70.00th=[ 110], 80.00th=[ 132], 90.00th=[ 153], 95.00th=[ 169], 00:14:02.641 | 99.00th=[ 192], 99.50th=[ 207], 99.90th=[ 259], 99.95th=[ 259], 00:14:02.641 | 99.99th=[ 259] 00:14:02.641 bw ( KiB/s): min= 1788, max=16384, per=0.86%, avg=9425.45, 
stdev=4216.69, samples=20 00:14:02.641 iops : min= 13, max= 128, avg=73.35, stdev=33.02, samples=20 00:14:02.641 lat (msec) : 2=0.07%, 10=14.38%, 20=26.37%, 50=6.86%, 100=33.82% 00:14:02.641 lat (msec) : 250=18.42%, 500=0.07% 00:14:02.641 cpu : usr=0.62%, sys=0.39%, ctx=2326, majf=0, minf=1 00:14:02.641 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.641 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.641 issued rwts: total=640,744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.641 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.641 job5: (groupid=0, jobs=1): err= 0: pid=69817: Thu Jul 25 08:56:09 2024 00:14:02.641 read: IOPS=76, BW=9807KiB/s (10.0MB/s)(80.0MiB/8353msec) 00:14:02.641 slat (usec): min=4, max=2813, avg=76.69, stdev=218.27 00:14:02.641 clat (usec): min=6890, max=71972, avg=16543.75, stdev=8539.71 00:14:02.641 lat (usec): min=6985, max=72133, avg=16620.44, stdev=8581.60 00:14:02.641 clat percentiles (usec): 00:14:02.641 | 1.00th=[ 7308], 5.00th=[ 7832], 10.00th=[ 8356], 20.00th=[ 9634], 00:14:02.641 | 30.00th=[11207], 40.00th=[12256], 50.00th=[13960], 60.00th=[16450], 00:14:02.641 | 70.00th=[19268], 80.00th=[22414], 90.00th=[25822], 95.00th=[30540], 00:14:02.641 | 99.00th=[49546], 99.50th=[50070], 99.90th=[71828], 99.95th=[71828], 00:14:02.641 | 99.99th=[71828] 00:14:02.641 write: IOPS=85, BW=10.6MiB/s (11.1MB/s)(92.5MiB/8703msec); 0 zone resets 00:14:02.641 slat (usec): min=34, max=3973, avg=173.79, stdev=283.20 00:14:02.641 clat (msec): min=44, max=276, avg=93.34, stdev=38.88 00:14:02.641 lat (msec): min=44, max=276, avg=93.52, stdev=38.90 00:14:02.641 clat percentiles (msec): 00:14:02.641 | 1.00th=[ 53], 5.00th=[ 58], 10.00th=[ 58], 20.00th=[ 65], 00:14:02.641 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 86], 00:14:02.641 | 70.00th=[ 100], 80.00th=[ 118], 90.00th=[ 157], 
95.00th=[ 176], 00:14:02.641 | 99.00th=[ 232], 99.50th=[ 249], 99.90th=[ 275], 99.95th=[ 275], 00:14:02.641 | 99.99th=[ 275] 00:14:02.641 bw ( KiB/s): min= 2816, max=15360, per=0.86%, avg=9378.40, stdev=4423.30, samples=20 00:14:02.641 iops : min= 22, max= 120, avg=73.05, stdev=34.53, samples=20 00:14:02.641 lat (msec) : 10=10.51%, 20=22.83%, 50=13.26%, 100=37.90%, 250=15.36% 00:14:02.641 lat (msec) : 500=0.14% 00:14:02.641 cpu : usr=0.79%, sys=0.28%, ctx=2187, majf=0, minf=5 00:14:02.641 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.641 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.641 issued rwts: total=640,740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.641 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.641 job6: (groupid=0, jobs=1): err= 0: pid=69869: Thu Jul 25 08:56:09 2024 00:14:02.641 read: IOPS=77, BW=9857KiB/s (10.1MB/s)(80.0MiB/8311msec) 00:14:02.641 slat (usec): min=4, max=3903, avg=91.18, stdev=245.44 00:14:02.641 clat (usec): min=2861, max=72177, avg=16899.87, stdev=8922.88 00:14:02.641 lat (usec): min=6765, max=72203, avg=16991.05, stdev=8931.75 00:14:02.641 clat percentiles (usec): 00:14:02.641 | 1.00th=[ 7177], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10552], 00:14:02.641 | 30.00th=[11600], 40.00th=[12649], 50.00th=[13960], 60.00th=[16450], 00:14:02.641 | 70.00th=[18744], 80.00th=[21103], 90.00th=[27919], 95.00th=[33424], 00:14:02.641 | 99.00th=[49021], 99.50th=[55313], 99.90th=[71828], 99.95th=[71828], 00:14:02.641 | 99.99th=[71828] 00:14:02.641 write: IOPS=81, BW=10.2MiB/s (10.6MB/s)(88.5MiB/8715msec); 0 zone resets 00:14:02.641 slat (usec): min=42, max=23371, avg=217.23, stdev=935.74 00:14:02.641 clat (msec): min=3, max=408, avg=97.48, stdev=52.01 00:14:02.641 lat (msec): min=3, max=409, avg=97.69, stdev=51.97 00:14:02.641 clat percentiles (msec): 00:14:02.641 | 1.00th=[ 
32], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 65], 00:14:02.641 | 30.00th=[ 71], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 89], 00:14:02.641 | 70.00th=[ 97], 80.00th=[ 115], 90.00th=[ 167], 95.00th=[ 211], 00:14:02.641 | 99.00th=[ 292], 99.50th=[ 342], 99.90th=[ 409], 99.95th=[ 409], 00:14:02.641 | 99.99th=[ 409] 00:14:02.641 bw ( KiB/s): min= 256, max=15104, per=0.82%, avg=8951.30, stdev=4930.28, samples=20 00:14:02.641 iops : min= 2, max= 118, avg=69.65, stdev=38.48, samples=20 00:14:02.641 lat (msec) : 4=0.22%, 10=7.27%, 20=28.56%, 50=11.87%, 100=37.61% 00:14:02.641 lat (msec) : 250=12.91%, 500=1.56% 00:14:02.641 cpu : usr=0.62%, sys=0.32%, ctx=2383, majf=0, minf=5 00:14:02.641 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.641 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.641 issued rwts: total=640,708,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.641 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.641 job7: (groupid=0, jobs=1): err= 0: pid=69893: Thu Jul 25 08:56:09 2024 00:14:02.641 read: IOPS=67, BW=8624KiB/s (8831kB/s)(60.0MiB/7124msec) 00:14:02.641 slat (usec): min=6, max=1058, avg=63.16, stdev=125.59 00:14:02.641 clat (msec): min=3, max=420, avg=23.70, stdev=58.41 00:14:02.641 lat (msec): min=3, max=420, avg=23.77, stdev=58.41 00:14:02.641 clat percentiles (msec): 00:14:02.641 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:14:02.641 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12], 00:14:02.641 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 20], 95.00th=[ 92], 00:14:02.641 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 422], 99.95th=[ 422], 00:14:02.641 | 99.99th=[ 422] 00:14:02.641 write: IOPS=71, BW=9115KiB/s (9334kB/s)(76.8MiB/8622msec); 0 zone resets 00:14:02.641 slat (usec): min=30, max=4697, avg=198.61, stdev=398.46 00:14:02.641 clat (msec): min=52, max=297, avg=111.69, 
stdev=44.08 00:14:02.641 lat (msec): min=52, max=297, avg=111.89, stdev=44.09 00:14:02.641 clat percentiles (msec): 00:14:02.641 | 1.00th=[ 57], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 69], 00:14:02.641 | 30.00th=[ 80], 40.00th=[ 93], 50.00th=[ 103], 60.00th=[ 120], 00:14:02.641 | 70.00th=[ 132], 80.00th=[ 148], 90.00th=[ 167], 95.00th=[ 190], 00:14:02.641 | 99.00th=[ 257], 99.50th=[ 275], 99.90th=[ 296], 99.95th=[ 296], 00:14:02.641 | 99.99th=[ 296] 00:14:02.641 bw ( KiB/s): min= 4104, max=15360, per=0.79%, avg=8629.89, stdev=3394.10, samples=18 00:14:02.641 iops : min= 32, max= 120, avg=67.22, stdev=26.60, samples=18 00:14:02.641 lat (msec) : 4=0.09%, 10=20.57%, 20=19.38%, 50=1.37%, 100=27.24% 00:14:02.641 lat (msec) : 250=29.25%, 500=2.10% 00:14:02.641 cpu : usr=0.62%, sys=0.26%, ctx=1869, majf=0, minf=5 00:14:02.641 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.641 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.641 issued rwts: total=480,614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.641 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.641 job8: (groupid=0, jobs=1): err= 0: pid=69905: Thu Jul 25 08:56:09 2024 00:14:02.641 read: IOPS=75, BW=9617KiB/s (9848kB/s)(80.0MiB/8518msec) 00:14:02.641 slat (usec): min=4, max=1166, avg=55.35, stdev=113.28 00:14:02.641 clat (msec): min=4, max=126, avg=13.94, stdev=13.20 00:14:02.641 lat (msec): min=5, max=126, avg=13.99, stdev=13.20 00:14:02.641 clat percentiles (msec): 00:14:02.641 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:14:02.641 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:14:02.641 | 70.00th=[ 14], 80.00th=[ 18], 90.00th=[ 21], 95.00th=[ 26], 00:14:02.641 | 99.00th=[ 116], 99.50th=[ 122], 99.90th=[ 127], 99.95th=[ 127], 00:14:02.641 | 99.99th=[ 127] 00:14:02.641 write: IOPS=84, BW=10.5MiB/s 
(11.0MB/s)(94.2MiB/8945msec); 0 zone resets 00:14:02.641 slat (usec): min=38, max=2517, avg=160.30, stdev=199.66 00:14:02.641 clat (msec): min=9, max=263, avg=94.17, stdev=37.98 00:14:02.641 lat (msec): min=9, max=263, avg=94.33, stdev=37.98 00:14:02.641 clat percentiles (msec): 00:14:02.641 | 1.00th=[ 15], 5.00th=[ 58], 10.00th=[ 60], 20.00th=[ 67], 00:14:02.642 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 91], 00:14:02.642 | 70.00th=[ 102], 80.00th=[ 128], 90.00th=[ 150], 95.00th=[ 171], 00:14:02.642 | 99.00th=[ 213], 99.50th=[ 239], 99.90th=[ 264], 99.95th=[ 264], 00:14:02.642 | 99.99th=[ 264] 00:14:02.642 bw ( KiB/s): min= 1788, max=15553, per=0.88%, avg=9554.70, stdev=4041.66, samples=20 00:14:02.642 iops : min= 13, max= 121, avg=74.35, stdev=31.62, samples=20 00:14:02.642 lat (msec) : 10=17.93%, 20=23.31%, 50=5.24%, 100=35.94%, 250=17.43% 00:14:02.642 lat (msec) : 500=0.14% 00:14:02.642 cpu : usr=0.72%, sys=0.20%, ctx=2397, majf=0, minf=3 00:14:02.642 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.642 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.642 issued rwts: total=640,754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.642 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.642 job9: (groupid=0, jobs=1): err= 0: pid=69979: Thu Jul 25 08:56:09 2024 00:14:02.642 read: IOPS=62, BW=8037KiB/s (8230kB/s)(60.0MiB/7645msec) 00:14:02.642 slat (usec): min=4, max=6420, avg=81.02, stdev=341.80 00:14:02.642 clat (msec): min=3, max=206, avg=25.86, stdev=36.39 00:14:02.642 lat (msec): min=3, max=206, avg=25.94, stdev=36.39 00:14:02.642 clat percentiles (msec): 00:14:02.642 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 12], 00:14:02.642 | 30.00th=[ 13], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 18], 00:14:02.642 | 70.00th=[ 21], 80.00th=[ 26], 90.00th=[ 40], 95.00th=[ 57], 00:14:02.642 | 
99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 207], 99.95th=[ 207], 00:14:02.642 | 99.99th=[ 207] 00:14:02.642 write: IOPS=71, BW=9145KiB/s (9364kB/s)(75.8MiB/8482msec); 0 zone resets 00:14:02.642 slat (usec): min=37, max=3038, avg=182.94, stdev=238.47 00:14:02.642 clat (msec): min=52, max=336, avg=111.15, stdev=44.83 00:14:02.642 lat (msec): min=52, max=336, avg=111.33, stdev=44.83 00:14:02.642 clat percentiles (msec): 00:14:02.642 | 1.00th=[ 56], 5.00th=[ 61], 10.00th=[ 65], 20.00th=[ 74], 00:14:02.642 | 30.00th=[ 82], 40.00th=[ 94], 50.00th=[ 103], 60.00th=[ 113], 00:14:02.642 | 70.00th=[ 127], 80.00th=[ 140], 90.00th=[ 167], 95.00th=[ 186], 00:14:02.642 | 99.00th=[ 284], 99.50th=[ 309], 99.90th=[ 338], 99.95th=[ 338], 00:14:02.642 | 99.99th=[ 338] 00:14:02.642 bw ( KiB/s): min= 768, max=14592, per=0.74%, avg=8069.26, stdev=3486.15, samples=19 00:14:02.642 iops : min= 6, max= 114, avg=62.89, stdev=27.28, samples=19 00:14:02.642 lat (msec) : 4=0.46%, 10=7.18%, 20=22.38%, 50=11.79%, 100=26.43% 00:14:02.642 lat (msec) : 250=30.66%, 500=1.10% 00:14:02.642 cpu : usr=0.43%, sys=0.34%, ctx=1980, majf=0, minf=4 00:14:02.642 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.642 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.642 issued rwts: total=480,606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.642 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.642 job10: (groupid=0, jobs=1): err= 0: pid=70088: Thu Jul 25 08:56:09 2024 00:14:02.642 read: IOPS=101, BW=12.7MiB/s (13.3MB/s)(100MiB/7872msec) 00:14:02.642 slat (usec): min=5, max=1891, avg=58.55, stdev=144.17 00:14:02.642 clat (usec): min=1911, max=138560, avg=12590.37, stdev=14241.47 00:14:02.642 lat (msec): min=2, max=138, avg=12.65, stdev=14.24 00:14:02.642 clat percentiles (msec): 00:14:02.642 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 
00:14:02.642 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:14:02.642 | 70.00th=[ 13], 80.00th=[ 16], 90.00th=[ 23], 95.00th=[ 34], 00:14:02.642 | 99.00th=[ 81], 99.50th=[ 116], 99.90th=[ 140], 99.95th=[ 140], 00:14:02.642 | 99.99th=[ 140] 00:14:02.642 write: IOPS=97, BW=12.2MiB/s (12.8MB/s)(107MiB/8758msec); 0 zone resets 00:14:02.642 slat (usec): min=37, max=5295, avg=205.65, stdev=370.09 00:14:02.642 clat (msec): min=33, max=244, avg=81.49, stdev=33.72 00:14:02.642 lat (msec): min=36, max=244, avg=81.69, stdev=33.71 00:14:02.642 clat percentiles (msec): 00:14:02.642 | 1.00th=[ 41], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:14:02.642 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 81], 00:14:02.642 | 70.00th=[ 99], 80.00th=[ 113], 90.00th=[ 129], 95.00th=[ 144], 00:14:02.642 | 99.00th=[ 171], 99.50th=[ 194], 99.90th=[ 245], 99.95th=[ 245], 00:14:02.642 | 99.99th=[ 245] 00:14:02.642 bw ( KiB/s): min= 1795, max=20480, per=0.99%, avg=10838.90, stdev=4587.51, samples=20 00:14:02.642 iops : min= 14, max= 160, avg=84.55, stdev=35.83, samples=20 00:14:02.642 lat (msec) : 2=0.06%, 4=2.24%, 10=27.69%, 20=11.61%, 50=14.03% 00:14:02.642 lat (msec) : 100=29.20%, 250=15.18% 00:14:02.642 cpu : usr=0.89%, sys=0.36%, ctx=2822, majf=0, minf=3 00:14:02.642 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.642 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.642 issued rwts: total=800,854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.642 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.642 job11: (groupid=0, jobs=1): err= 0: pid=70283: Thu Jul 25 08:56:09 2024 00:14:02.642 read: IOPS=110, BW=13.8MiB/s (14.4MB/s)(120MiB/8723msec) 00:14:02.642 slat (usec): min=4, max=4160, avg=77.41, stdev=221.00 00:14:02.642 clat (msec): min=5, max=150, avg=15.53, stdev=12.70 00:14:02.642 lat (msec): min=5, 
max=150, avg=15.61, stdev=12.74 00:14:02.642 clat percentiles (msec): 00:14:02.642 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:14:02.642 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:14:02.642 | 70.00th=[ 16], 80.00th=[ 20], 90.00th=[ 25], 95.00th=[ 30], 00:14:02.642 | 99.00th=[ 55], 99.50th=[ 131], 99.90th=[ 150], 99.95th=[ 150], 00:14:02.642 | 99.99th=[ 150] 00:14:02.642 write: IOPS=121, BW=15.2MiB/s (16.0MB/s)(124MiB/8118msec); 0 zone resets 00:14:02.642 slat (usec): min=33, max=4916, avg=181.17, stdev=329.07 00:14:02.642 clat (msec): min=34, max=215, avg=64.35, stdev=27.42 00:14:02.642 lat (msec): min=34, max=215, avg=64.53, stdev=27.43 00:14:02.642 clat percentiles (msec): 00:14:02.642 | 1.00th=[ 39], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 47], 00:14:02.642 | 30.00th=[ 51], 40.00th=[ 54], 50.00th=[ 56], 60.00th=[ 59], 00:14:02.642 | 70.00th=[ 64], 80.00th=[ 74], 90.00th=[ 99], 95.00th=[ 124], 00:14:02.642 | 99.00th=[ 171], 99.50th=[ 190], 99.90th=[ 215], 99.95th=[ 215], 00:14:02.642 | 99.99th=[ 215] 00:14:02.642 bw ( KiB/s): min= 5632, max=22060, per=1.21%, avg=13242.89, stdev=5297.46, samples=19 00:14:02.642 iops : min= 44, max= 172, avg=103.32, stdev=41.39, samples=19 00:14:02.642 lat (msec) : 10=13.79%, 20=26.92%, 50=22.97%, 100=31.08%, 250=5.23% 00:14:02.642 cpu : usr=0.98%, sys=0.42%, ctx=3359, majf=0, minf=3 00:14:02.642 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.642 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.642 issued rwts: total=960,990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.642 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.642 job12: (groupid=0, jobs=1): err= 0: pid=70385: Thu Jul 25 08:56:09 2024 00:14:02.642 read: IOPS=109, BW=13.7MiB/s (14.4MB/s)(120MiB/8766msec) 00:14:02.642 slat (usec): min=4, max=1463, avg=55.52, stdev=129.48 
00:14:02.642 clat (usec): min=2757, max=48542, avg=11264.21, stdev=6433.89 00:14:02.642 lat (usec): min=2801, max=48559, avg=11319.73, stdev=6427.12 00:14:02.642 clat percentiles (usec): 00:14:02.642 | 1.00th=[ 3916], 5.00th=[ 4883], 10.00th=[ 5407], 20.00th=[ 6587], 00:14:02.642 | 30.00th=[ 7439], 40.00th=[ 8291], 50.00th=[ 9110], 60.00th=[10290], 00:14:02.642 | 70.00th=[12780], 80.00th=[15139], 90.00th=[19006], 95.00th=[23987], 00:14:02.642 | 99.00th=[35390], 99.50th=[40633], 99.90th=[48497], 99.95th=[48497], 00:14:02.642 | 99.99th=[48497] 00:14:02.642 write: IOPS=114, BW=14.4MiB/s (15.1MB/s)(125MiB/8698msec); 0 zone resets 00:14:02.642 slat (usec): min=35, max=20104, avg=196.03, stdev=686.05 00:14:02.642 clat (msec): min=12, max=304, avg=68.94, stdev=34.32 00:14:02.642 lat (msec): min=12, max=304, avg=69.14, stdev=34.31 00:14:02.642 clat percentiles (msec): 00:14:02.642 | 1.00th=[ 18], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 45], 00:14:02.642 | 30.00th=[ 49], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 65], 00:14:02.642 | 70.00th=[ 73], 80.00th=[ 92], 90.00th=[ 113], 95.00th=[ 130], 00:14:02.642 | 99.00th=[ 215], 99.50th=[ 279], 99.90th=[ 305], 99.95th=[ 305], 00:14:02.642 | 99.99th=[ 305] 00:14:02.643 bw ( KiB/s): min= 1280, max=22272, per=1.16%, avg=12679.45, stdev=6083.73, samples=20 00:14:02.643 iops : min= 10, max= 174, avg=99.00, stdev=47.51, samples=20 00:14:02.643 lat (msec) : 4=0.56%, 10=28.08%, 20=16.79%, 50=20.62%, 100=26.29% 00:14:02.643 lat (msec) : 250=7.35%, 500=0.31% 00:14:02.643 cpu : usr=0.92%, sys=0.48%, ctx=3216, majf=0, minf=3 00:14:02.643 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.643 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.643 issued rwts: total=960,999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.643 job13: 
(groupid=0, jobs=1): err= 0: pid=70424: Thu Jul 25 08:56:09 2024 00:14:02.643 read: IOPS=95, BW=11.9MiB/s (12.5MB/s)(104MiB/8684msec) 00:14:02.643 slat (usec): min=4, max=2133, avg=58.37, stdev=154.76 00:14:02.643 clat (usec): min=3329, max=54587, avg=13621.33, stdev=7928.60 00:14:02.643 lat (usec): min=3378, max=54708, avg=13679.69, stdev=7936.59 00:14:02.643 clat percentiles (usec): 00:14:02.643 | 1.00th=[ 3916], 5.00th=[ 5342], 10.00th=[ 6587], 20.00th=[ 7504], 00:14:02.643 | 30.00th=[ 8717], 40.00th=[ 9896], 50.00th=[11207], 60.00th=[13173], 00:14:02.643 | 70.00th=[15926], 80.00th=[18482], 90.00th=[22414], 95.00th=[28443], 00:14:02.643 | 99.00th=[46400], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:14:02.643 | 99.99th=[54789] 00:14:02.643 write: IOPS=112, BW=14.0MiB/s (14.7MB/s)(120MiB/8567msec); 0 zone resets 00:14:02.643 slat (usec): min=30, max=6789, avg=227.75, stdev=517.22 00:14:02.643 clat (msec): min=27, max=300, avg=70.55, stdev=31.77 00:14:02.643 lat (msec): min=27, max=300, avg=70.78, stdev=31.77 00:14:02.643 clat percentiles (msec): 00:14:02.643 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 45], 20.00th=[ 49], 00:14:02.643 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 67], 00:14:02.643 | 70.00th=[ 75], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 124], 00:14:02.643 | 99.00th=[ 194], 99.50th=[ 245], 99.90th=[ 300], 99.95th=[ 300], 00:14:02.643 | 99.99th=[ 300] 00:14:02.643 bw ( KiB/s): min= 2048, max=21760, per=1.13%, avg=12353.74, stdev=4976.70, samples=19 00:14:02.643 iops : min= 16, max= 170, avg=96.42, stdev=38.81, samples=19 00:14:02.643 lat (msec) : 4=0.56%, 10=18.32%, 20=19.66%, 50=19.94%, 100=34.75% 00:14:02.643 lat (msec) : 250=6.59%, 500=0.17% 00:14:02.643 cpu : usr=0.84%, sys=0.41%, ctx=3060, majf=0, minf=3 00:14:02.643 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.643 complete : 0=0.0%, 4=99.3%, 8=0.7%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.643 issued rwts: total=830,960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.643 job14: (groupid=0, jobs=1): err= 0: pid=70456: Thu Jul 25 08:56:09 2024 00:14:02.643 read: IOPS=97, BW=12.2MiB/s (12.8MB/s)(100MiB/8179msec) 00:14:02.643 slat (usec): min=5, max=1762, avg=61.54, stdev=133.44 00:14:02.643 clat (msec): min=3, max=145, avg=13.57, stdev=15.97 00:14:02.643 lat (msec): min=3, max=146, avg=13.63, stdev=15.98 00:14:02.643 clat percentiles (msec): 00:14:02.643 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:14:02.643 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 10], 60.00th=[ 11], 00:14:02.643 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 22], 95.00th=[ 42], 00:14:02.643 | 99.00th=[ 75], 99.50th=[ 130], 99.90th=[ 146], 99.95th=[ 146], 00:14:02.643 | 99.99th=[ 146] 00:14:02.643 write: IOPS=103, BW=13.0MiB/s (13.6MB/s)(112MiB/8648msec); 0 zone resets 00:14:02.643 slat (usec): min=34, max=15763, avg=206.11, stdev=607.21 00:14:02.643 clat (msec): min=33, max=307, avg=76.42, stdev=38.49 00:14:02.643 lat (msec): min=33, max=309, avg=76.62, stdev=38.54 00:14:02.643 clat percentiles (msec): 00:14:02.643 | 1.00th=[ 37], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 48], 00:14:02.643 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 65], 60.00th=[ 74], 00:14:02.643 | 70.00th=[ 87], 80.00th=[ 99], 90.00th=[ 126], 95.00th=[ 140], 00:14:02.643 | 99.00th=[ 255], 99.50th=[ 292], 99.90th=[ 309], 99.95th=[ 309], 00:14:02.643 | 99.99th=[ 309] 00:14:02.643 bw ( KiB/s): min= 4352, max=21418, per=1.04%, avg=11360.05, stdev=5009.43, samples=20 00:14:02.643 iops : min= 34, max= 167, avg=88.60, stdev=39.13, samples=20 00:14:02.643 lat (msec) : 4=1.89%, 10=24.71%, 20=14.62%, 50=16.51%, 100=32.13% 00:14:02.643 lat (msec) : 250=9.61%, 500=0.53% 00:14:02.643 cpu : usr=0.84%, sys=0.33%, ctx=3010, majf=0, minf=1 00:14:02.643 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:14:02.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.643 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.643 issued rwts: total=800,896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.643 job15: (groupid=0, jobs=1): err= 0: pid=70457: Thu Jul 25 08:56:09 2024 00:14:02.643 read: IOPS=106, BW=13.3MiB/s (14.0MB/s)(120MiB/8997msec) 00:14:02.643 slat (usec): min=4, max=1078, avg=65.58, stdev=131.91 00:14:02.643 clat (msec): min=2, max=122, avg=11.19, stdev=11.50 00:14:02.643 lat (msec): min=2, max=122, avg=11.26, stdev=11.49 00:14:02.643 clat percentiles (msec): 00:14:02.643 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:14:02.643 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:14:02.643 | 70.00th=[ 11], 80.00th=[ 13], 90.00th=[ 19], 95.00th=[ 27], 00:14:02.643 | 99.00th=[ 63], 99.50th=[ 102], 99.90th=[ 124], 99.95th=[ 124], 00:14:02.643 | 99.99th=[ 124] 00:14:02.643 write: IOPS=115, BW=14.5MiB/s (15.2MB/s)(125MiB/8627msec); 0 zone resets 00:14:02.643 slat (usec): min=32, max=5951, avg=197.35, stdev=375.37 00:14:02.643 clat (usec): min=324, max=197905, avg=68415.06, stdev=34654.41 00:14:02.643 lat (usec): min=1437, max=197981, avg=68612.41, stdev=34673.65 00:14:02.643 clat percentiles (msec): 00:14:02.643 | 1.00th=[ 6], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 46], 00:14:02.643 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 64], 00:14:02.643 | 70.00th=[ 75], 80.00th=[ 93], 90.00th=[ 121], 95.00th=[ 142], 00:14:02.643 | 99.00th=[ 174], 99.50th=[ 186], 99.90th=[ 199], 99.95th=[ 199], 00:14:02.643 | 99.99th=[ 199] 00:14:02.643 bw ( KiB/s): min= 510, max=26368, per=1.16%, avg=12678.05, stdev=6520.79, samples=20 00:14:02.643 iops : min= 3, max= 206, avg=98.75, stdev=51.17, samples=20 00:14:02.643 lat (usec) : 500=0.05% 00:14:02.643 lat (msec) : 4=0.77%, 10=33.23%, 20=13.07%, 50=18.84%, 
100=24.66% 00:14:02.643 lat (msec) : 250=9.39% 00:14:02.643 cpu : usr=0.93%, sys=0.44%, ctx=3241, majf=0, minf=3 00:14:02.643 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.643 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.643 issued rwts: total=960,999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.643 job16: (groupid=0, jobs=1): err= 0: pid=70459: Thu Jul 25 08:56:09 2024 00:14:02.643 read: IOPS=108, BW=13.6MiB/s (14.3MB/s)(117MiB/8574msec) 00:14:02.643 slat (usec): min=4, max=3880, avg=77.82, stdev=210.11 00:14:02.643 clat (usec): min=4969, max=62737, avg=15380.38, stdev=8677.12 00:14:02.643 lat (usec): min=5021, max=62757, avg=15458.20, stdev=8672.12 00:14:02.643 clat percentiles (usec): 00:14:02.643 | 1.00th=[ 5866], 5.00th=[ 6980], 10.00th=[ 7963], 20.00th=[ 9110], 00:14:02.643 | 30.00th=[ 9896], 40.00th=[11600], 50.00th=[12911], 60.00th=[14877], 00:14:02.643 | 70.00th=[17171], 80.00th=[19530], 90.00th=[25297], 95.00th=[33162], 00:14:02.643 | 99.00th=[51643], 99.50th=[57410], 99.90th=[62653], 99.95th=[62653], 00:14:02.643 | 99.99th=[62653] 00:14:02.643 write: IOPS=117, BW=14.7MiB/s (15.5MB/s)(120MiB/8137msec); 0 zone resets 00:14:02.643 slat (usec): min=41, max=3102, avg=180.39, stdev=283.54 00:14:02.643 clat (msec): min=20, max=367, avg=67.15, stdev=35.12 00:14:02.643 lat (msec): min=20, max=368, avg=67.33, stdev=35.12 00:14:02.643 clat percentiles (msec): 00:14:02.643 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 47], 00:14:02.643 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 56], 60.00th=[ 62], 00:14:02.643 | 70.00th=[ 68], 80.00th=[ 80], 90.00th=[ 101], 95.00th=[ 126], 00:14:02.643 | 99.00th=[ 220], 99.50th=[ 292], 99.90th=[ 368], 99.95th=[ 368], 00:14:02.643 | 99.99th=[ 368] 00:14:02.643 bw ( KiB/s): min= 1536, max=21504, 
per=1.16%, avg=12623.21, stdev=6084.46, samples=19 00:14:02.643 iops : min= 12, max= 168, avg=98.53, stdev=47.49, samples=19 00:14:02.643 lat (msec) : 10=14.95%, 20=25.36%, 50=23.98%, 100=30.43%, 250=4.91% 00:14:02.643 lat (msec) : 500=0.37% 00:14:02.644 cpu : usr=0.99%, sys=0.40%, ctx=3217, majf=0, minf=1 00:14:02.644 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.644 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.644 issued rwts: total=933,960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.644 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.644 job17: (groupid=0, jobs=1): err= 0: pid=70460: Thu Jul 25 08:56:09 2024 00:14:02.644 read: IOPS=106, BW=13.3MiB/s (14.0MB/s)(120MiB/9003msec) 00:14:02.644 slat (usec): min=5, max=4123, avg=70.75, stdev=222.74 00:14:02.644 clat (usec): min=2663, max=81183, avg=12051.55, stdev=7978.02 00:14:02.644 lat (usec): min=2684, max=81202, avg=12122.31, stdev=7968.58 00:14:02.644 clat percentiles (usec): 00:14:02.644 | 1.00th=[ 4359], 5.00th=[ 5407], 10.00th=[ 6325], 20.00th=[ 7570], 00:14:02.644 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[10421], 60.00th=[11469], 00:14:02.644 | 70.00th=[12911], 80.00th=[14615], 90.00th=[17695], 95.00th=[23462], 00:14:02.644 | 99.00th=[45351], 99.50th=[78119], 99.90th=[81265], 99.95th=[81265], 00:14:02.644 | 99.99th=[81265] 00:14:02.644 write: IOPS=115, BW=14.4MiB/s (15.1MB/s)(124MiB/8590msec); 0 zone resets 00:14:02.644 slat (usec): min=31, max=13481, avg=216.17, stdev=632.44 00:14:02.644 clat (usec): min=1355, max=328370, avg=68708.96, stdev=36859.96 00:14:02.644 lat (usec): min=1411, max=328452, avg=68925.14, stdev=36911.49 00:14:02.644 clat percentiles (msec): 00:14:02.644 | 1.00th=[ 8], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 46], 00:14:02.644 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 56], 60.00th=[ 63], 00:14:02.644 | 70.00th=[ 
72], 80.00th=[ 91], 90.00th=[ 116], 95.00th=[ 148], 00:14:02.644 | 99.00th=[ 192], 99.50th=[ 218], 99.90th=[ 330], 99.95th=[ 330], 00:14:02.644 | 99.99th=[ 330] 00:14:02.644 bw ( KiB/s): min= 3824, max=26420, per=1.15%, avg=12577.45, stdev=5643.58, samples=20 00:14:02.644 iops : min= 29, max= 206, avg=97.95, stdev=44.20, samples=20 00:14:02.644 lat (msec) : 2=0.05%, 4=0.41%, 10=23.08%, 20=24.05%, 50=17.85% 00:14:02.644 lat (msec) : 100=25.95%, 250=8.46%, 500=0.15% 00:14:02.644 cpu : usr=1.03%, sys=0.37%, ctx=3353, majf=0, minf=1 00:14:02.644 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.644 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.644 issued rwts: total=960,990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.644 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.644 job18: (groupid=0, jobs=1): err= 0: pid=70461: Thu Jul 25 08:56:09 2024 00:14:02.644 read: IOPS=92, BW=11.6MiB/s (12.1MB/s)(100MiB/8638msec) 00:14:02.644 slat (usec): min=4, max=1110, avg=67.97, stdev=133.54 00:14:02.644 clat (usec): min=2812, max=71266, avg=14243.60, stdev=9263.43 00:14:02.644 lat (usec): min=2870, max=71975, avg=14311.57, stdev=9288.37 00:14:02.644 clat percentiles (usec): 00:14:02.644 | 1.00th=[ 3687], 5.00th=[ 4080], 10.00th=[ 4686], 20.00th=[ 6259], 00:14:02.644 | 30.00th=[ 7898], 40.00th=[10290], 50.00th=[11731], 60.00th=[15008], 00:14:02.644 | 70.00th=[17957], 80.00th=[20055], 90.00th=[26084], 95.00th=[31589], 00:14:02.644 | 99.00th=[41681], 99.50th=[60031], 99.90th=[70779], 99.95th=[70779], 00:14:02.644 | 99.99th=[70779] 00:14:02.644 write: IOPS=108, BW=13.6MiB/s (14.3MB/s)(117MiB/8594msec); 0 zone resets 00:14:02.644 slat (usec): min=34, max=5479, avg=184.49, stdev=305.24 00:14:02.644 clat (msec): min=34, max=233, avg=72.70, stdev=31.44 00:14:02.644 lat (msec): min=34, max=233, avg=72.88, stdev=31.43 
00:14:02.644 clat percentiles (msec): 00:14:02.644 | 1.00th=[ 38], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 50], 00:14:02.644 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 71], 00:14:02.644 | 70.00th=[ 80], 80.00th=[ 89], 90.00th=[ 109], 95.00th=[ 136], 00:14:02.644 | 99.00th=[ 199], 99.50th=[ 224], 99.90th=[ 234], 99.95th=[ 234], 00:14:02.644 | 99.99th=[ 234] 00:14:02.644 bw ( KiB/s): min= 1536, max=19712, per=1.09%, avg=11885.15, stdev=4656.97, samples=20 00:14:02.644 iops : min= 12, max= 154, avg=92.70, stdev=36.36, samples=20 00:14:02.644 lat (msec) : 4=1.67%, 10=16.01%, 20=19.41%, 50=20.33%, 100=34.97% 00:14:02.644 lat (msec) : 250=7.60% 00:14:02.644 cpu : usr=0.84%, sys=0.47%, ctx=3008, majf=0, minf=1 00:14:02.644 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.644 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.644 issued rwts: total=800,936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.644 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.644 job19: (groupid=0, jobs=1): err= 0: pid=70462: Thu Jul 25 08:56:09 2024 00:14:02.644 read: IOPS=109, BW=13.6MiB/s (14.3MB/s)(120MiB/8803msec) 00:14:02.644 slat (usec): min=5, max=1709, avg=79.02, stdev=154.35 00:14:02.644 clat (usec): min=4825, max=88954, avg=16045.62, stdev=10358.17 00:14:02.644 lat (usec): min=4839, max=88982, avg=16124.65, stdev=10354.96 00:14:02.644 clat percentiles (usec): 00:14:02.644 | 1.00th=[ 6194], 5.00th=[ 7111], 10.00th=[ 8717], 20.00th=[ 9896], 00:14:02.644 | 30.00th=[10945], 40.00th=[11994], 50.00th=[13304], 60.00th=[14615], 00:14:02.644 | 70.00th=[16712], 80.00th=[19268], 90.00th=[26084], 95.00th=[31589], 00:14:02.644 | 99.00th=[71828], 99.50th=[79168], 99.90th=[88605], 99.95th=[88605], 00:14:02.644 | 99.99th=[88605] 00:14:02.644 write: IOPS=119, BW=15.0MiB/s (15.7MB/s)(121MiB/8081msec); 0 zone resets 00:14:02.644 
slat (usec): min=38, max=7280, avg=201.55, stdev=451.59 00:14:02.644 clat (msec): min=34, max=328, avg=66.00, stdev=32.46 00:14:02.644 lat (msec): min=34, max=328, avg=66.20, stdev=32.47 00:14:02.644 clat percentiles (msec): 00:14:02.644 | 1.00th=[ 38], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 47], 00:14:02.644 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 61], 00:14:02.644 | 70.00th=[ 65], 80.00th=[ 75], 90.00th=[ 99], 95.00th=[ 122], 00:14:02.644 | 99.00th=[ 215], 99.50th=[ 262], 99.90th=[ 330], 99.95th=[ 330], 00:14:02.644 | 99.99th=[ 330] 00:14:02.644 bw ( KiB/s): min= 2304, max=20398, per=1.13%, avg=12283.00, stdev=5993.84, samples=20 00:14:02.644 iops : min= 18, max= 159, avg=95.80, stdev=46.79, samples=20 00:14:02.644 lat (msec) : 10=10.37%, 20=30.08%, 50=22.51%, 100=32.37%, 250=4.41% 00:14:02.644 lat (msec) : 500=0.26% 00:14:02.644 cpu : usr=0.93%, sys=0.40%, ctx=3370, majf=0, minf=5 00:14:02.644 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.644 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.644 issued rwts: total=960,968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.644 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.644 job20: (groupid=0, jobs=1): err= 0: pid=70463: Thu Jul 25 08:56:09 2024 00:14:02.644 read: IOPS=108, BW=13.5MiB/s (14.2MB/s)(117MiB/8660msec) 00:14:02.644 slat (usec): min=4, max=3243, avg=71.40, stdev=202.54 00:14:02.644 clat (usec): min=4059, max=94040, avg=13864.69, stdev=9275.49 00:14:02.644 lat (usec): min=4079, max=94063, avg=13936.09, stdev=9289.65 00:14:02.644 clat percentiles (usec): 00:14:02.644 | 1.00th=[ 5211], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 7963], 00:14:02.644 | 30.00th=[ 9241], 40.00th=[10421], 50.00th=[11731], 60.00th=[12911], 00:14:02.644 | 70.00th=[14746], 80.00th=[17171], 90.00th=[21890], 95.00th=[28443], 00:14:02.644 | 
99.00th=[52691], 99.50th=[57934], 99.90th=[93848], 99.95th=[93848], 00:14:02.644 | 99.99th=[93848] 00:14:02.644 write: IOPS=114, BW=14.4MiB/s (15.1MB/s)(120MiB/8358msec); 0 zone resets 00:14:02.644 slat (usec): min=41, max=6033, avg=193.26, stdev=313.40 00:14:02.644 clat (msec): min=5, max=311, avg=68.93, stdev=34.82 00:14:02.644 lat (msec): min=5, max=311, avg=69.12, stdev=34.82 00:14:02.644 clat percentiles (msec): 00:14:02.644 | 1.00th=[ 20], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 47], 00:14:02.644 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 66], 00:14:02.644 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 101], 95.00th=[ 128], 00:14:02.644 | 99.00th=[ 243], 99.50th=[ 264], 99.90th=[ 313], 99.95th=[ 313], 00:14:02.644 | 99.99th=[ 313] 00:14:02.644 bw ( KiB/s): min= 2299, max=19712, per=1.17%, avg=12755.63, stdev=6036.45, samples=19 00:14:02.644 iops : min= 17, max= 154, avg=99.47, stdev=47.37, samples=19 00:14:02.644 lat (msec) : 10=18.20%, 20=25.47%, 50=18.09%, 100=33.12%, 250=4.75% 00:14:02.644 lat (msec) : 500=0.37% 00:14:02.644 cpu : usr=1.04%, sys=0.42%, ctx=3277, majf=0, minf=1 00:14:02.644 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.644 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.644 issued rwts: total=936,960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.644 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.644 job21: (groupid=0, jobs=1): err= 0: pid=70464: Thu Jul 25 08:56:09 2024 00:14:02.644 read: IOPS=110, BW=13.8MiB/s (14.5MB/s)(120MiB/8673msec) 00:14:02.644 slat (usec): min=4, max=1501, avg=62.52, stdev=113.58 00:14:02.644 clat (usec): min=2534, max=80783, avg=13641.94, stdev=11513.96 00:14:02.644 lat (usec): min=2546, max=80847, avg=13704.46, stdev=11513.46 00:14:02.644 clat percentiles (usec): 00:14:02.644 | 1.00th=[ 5014], 5.00th=[ 5866], 10.00th=[ 6390], 20.00th=[ 
7111], 00:14:02.645 | 30.00th=[ 7767], 40.00th=[ 8455], 50.00th=[ 9634], 60.00th=[11600], 00:14:02.645 | 70.00th=[14091], 80.00th=[17171], 90.00th=[23200], 95.00th=[32375], 00:14:02.645 | 99.00th=[73925], 99.50th=[79168], 99.90th=[81265], 99.95th=[81265], 00:14:02.645 | 99.99th=[81265] 00:14:02.645 write: IOPS=116, BW=14.5MiB/s (15.2MB/s)(122MiB/8371msec); 0 zone resets 00:14:02.645 slat (usec): min=32, max=5287, avg=185.58, stdev=292.90 00:14:02.645 clat (msec): min=35, max=443, avg=68.21, stdev=39.00 00:14:02.645 lat (msec): min=35, max=443, avg=68.39, stdev=39.00 00:14:02.645 clat percentiles (msec): 00:14:02.645 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 47], 00:14:02.645 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 62], 00:14:02.645 | 70.00th=[ 69], 80.00th=[ 81], 90.00th=[ 106], 95.00th=[ 128], 00:14:02.645 | 99.00th=[ 243], 99.50th=[ 388], 99.90th=[ 443], 99.95th=[ 443], 00:14:02.645 | 99.99th=[ 443] 00:14:02.645 bw ( KiB/s): min= 256, max=23040, per=1.13%, avg=12342.90, stdev=5974.33, samples=20 00:14:02.645 iops : min= 2, max= 180, avg=96.25, stdev=46.64, samples=20 00:14:02.645 lat (msec) : 4=0.21%, 10=25.57%, 20=17.24%, 50=19.88%, 100=31.57% 00:14:02.645 lat (msec) : 250=5.12%, 500=0.41% 00:14:02.645 cpu : usr=0.95%, sys=0.54%, ctx=3363, majf=0, minf=3 00:14:02.645 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.645 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.645 issued rwts: total=960,972,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.645 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.645 job22: (groupid=0, jobs=1): err= 0: pid=70465: Thu Jul 25 08:56:09 2024 00:14:02.645 read: IOPS=101, BW=12.7MiB/s (13.3MB/s)(100MiB/7866msec) 00:14:02.645 slat (usec): min=5, max=2355, avg=60.24, stdev=166.33 00:14:02.645 clat (msec): min=2, max=233, avg=14.08, stdev=27.99 
00:14:02.645 lat (msec): min=2, max=233, avg=14.14, stdev=28.01 00:14:02.645 clat percentiles (msec): 00:14:02.645 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 5], 00:14:02.645 | 30.00th=[ 5], 40.00th=[ 6], 50.00th=[ 7], 60.00th=[ 7], 00:14:02.645 | 70.00th=[ 9], 80.00th=[ 12], 90.00th=[ 20], 95.00th=[ 56], 00:14:02.645 | 99.00th=[ 148], 99.50th=[ 165], 99.90th=[ 234], 99.95th=[ 234], 00:14:02.645 | 99.99th=[ 234] 00:14:02.645 write: IOPS=99, BW=12.4MiB/s (13.1MB/s)(107MiB/8616msec); 0 zone resets 00:14:02.645 slat (usec): min=34, max=6330, avg=190.52, stdev=341.41 00:14:02.645 clat (msec): min=37, max=417, avg=79.53, stdev=42.92 00:14:02.645 lat (msec): min=37, max=418, avg=79.72, stdev=42.92 00:14:02.645 clat percentiles (msec): 00:14:02.645 | 1.00th=[ 40], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 51], 00:14:02.645 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 80], 00:14:02.645 | 70.00th=[ 87], 80.00th=[ 100], 90.00th=[ 115], 95.00th=[ 146], 00:14:02.645 | 99.00th=[ 251], 99.50th=[ 393], 99.90th=[ 418], 99.95th=[ 418], 00:14:02.645 | 99.99th=[ 418] 00:14:02.645 bw ( KiB/s): min= 1795, max=20736, per=1.00%, avg=10889.25, stdev=5204.54, samples=20 00:14:02.645 iops : min= 14, max= 162, avg=84.90, stdev=40.72, samples=20 00:14:02.645 lat (msec) : 4=5.19%, 10=31.42%, 20=6.88%, 50=12.06%, 100=32.99% 00:14:02.645 lat (msec) : 250=10.92%, 500=0.54% 00:14:02.645 cpu : usr=0.85%, sys=0.39%, ctx=2748, majf=0, minf=5 00:14:02.645 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.645 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.645 issued rwts: total=800,858,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.645 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.645 job23: (groupid=0, jobs=1): err= 0: pid=70466: Thu Jul 25 08:56:09 2024 00:14:02.645 read: IOPS=108, BW=13.6MiB/s 
(14.2MB/s)(120MiB/8834msec) 00:14:02.645 slat (usec): min=4, max=3827, avg=68.74, stdev=207.43 00:14:02.645 clat (usec): min=3402, max=38978, avg=10329.76, stdev=5239.24 00:14:02.645 lat (usec): min=3410, max=38994, avg=10398.50, stdev=5237.76 00:14:02.645 clat percentiles (usec): 00:14:02.645 | 1.00th=[ 4293], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6718], 00:14:02.645 | 30.00th=[ 7308], 40.00th=[ 7832], 50.00th=[ 8586], 60.00th=[ 9896], 00:14:02.645 | 70.00th=[11207], 80.00th=[12780], 90.00th=[16712], 95.00th=[21103], 00:14:02.645 | 99.00th=[30278], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060], 00:14:02.645 | 99.99th=[39060] 00:14:02.645 write: IOPS=113, BW=14.2MiB/s (14.9MB/s)(125MiB/8794msec); 0 zone resets 00:14:02.645 slat (usec): min=37, max=6771, avg=188.24, stdev=359.65 00:14:02.645 clat (msec): min=4, max=247, avg=69.62, stdev=32.37 00:14:02.645 lat (msec): min=4, max=247, avg=69.81, stdev=32.38 00:14:02.645 clat percentiles (msec): 00:14:02.645 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 48], 00:14:02.645 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 66], 00:14:02.645 | 70.00th=[ 74], 80.00th=[ 87], 90.00th=[ 108], 95.00th=[ 134], 00:14:02.645 | 99.00th=[ 199], 99.50th=[ 236], 99.90th=[ 249], 99.95th=[ 249], 00:14:02.645 | 99.99th=[ 249] 00:14:02.645 bw ( KiB/s): min= 6387, max=21248, per=1.17%, avg=12729.95, stdev=5176.67, samples=20 00:14:02.645 iops : min= 49, max= 166, avg=99.30, stdev=40.58, samples=20 00:14:02.645 lat (msec) : 4=0.15%, 10=30.17%, 20=15.75%, 50=16.26%, 100=31.14% 00:14:02.645 lat (msec) : 250=6.52% 00:14:02.645 cpu : usr=0.95%, sys=0.47%, ctx=3390, majf=0, minf=3 00:14:02.645 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.645 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.645 issued rwts: total=960,1002,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.645 
latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.645 job24: (groupid=0, jobs=1): err= 0: pid=70467: Thu Jul 25 08:56:09 2024 00:14:02.645 read: IOPS=100, BW=12.5MiB/s (13.1MB/s)(100MiB/7976msec) 00:14:02.645 slat (usec): min=4, max=5389, avg=84.71, stdev=324.18 00:14:02.646 clat (usec): min=1521, max=118031, avg=14179.45, stdev=15296.35 00:14:02.646 lat (msec): min=3, max=118, avg=14.26, stdev=15.31 00:14:02.646 clat percentiles (msec): 00:14:02.646 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:14:02.646 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 11], 00:14:02.646 | 70.00th=[ 14], 80.00th=[ 20], 90.00th=[ 31], 95.00th=[ 42], 00:14:02.646 | 99.00th=[ 79], 99.50th=[ 101], 99.90th=[ 118], 99.95th=[ 118], 00:14:02.646 | 99.99th=[ 118] 00:14:02.646 write: IOPS=103, BW=12.9MiB/s (13.5MB/s)(111MiB/8614msec); 0 zone resets 00:14:02.646 slat (usec): min=42, max=3725, avg=195.13, stdev=336.37 00:14:02.646 clat (msec): min=24, max=353, avg=76.73, stdev=33.82 00:14:02.646 lat (msec): min=24, max=354, avg=76.92, stdev=33.84 00:14:02.646 clat percentiles (msec): 00:14:02.646 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 53], 00:14:02.646 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 77], 00:14:02.646 | 70.00th=[ 86], 80.00th=[ 99], 90.00th=[ 111], 95.00th=[ 128], 00:14:02.646 | 99.00th=[ 199], 99.50th=[ 305], 99.90th=[ 355], 99.95th=[ 355], 00:14:02.646 | 99.99th=[ 355] 00:14:02.646 bw ( KiB/s): min= 2299, max=18688, per=1.03%, avg=11281.60, stdev=4250.05, samples=20 00:14:02.646 iops : min= 17, max= 146, avg=87.90, stdev=33.39, samples=20 00:14:02.646 lat (msec) : 2=0.06%, 4=1.48%, 10=26.70%, 20=9.71%, 50=15.75% 00:14:02.646 lat (msec) : 100=36.29%, 250=9.71%, 500=0.30% 00:14:02.646 cpu : usr=0.88%, sys=0.35%, ctx=2989, majf=0, minf=1 00:14:02.646 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.646 
complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.646 issued rwts: total=800,889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.646 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.646 job25: (groupid=0, jobs=1): err= 0: pid=70468: Thu Jul 25 08:56:09 2024 00:14:02.646 read: IOPS=109, BW=13.7MiB/s (14.3MB/s)(120MiB/8770msec) 00:14:02.646 slat (usec): min=4, max=2082, avg=58.23, stdev=134.05 00:14:02.646 clat (usec): min=3562, max=71458, avg=11883.51, stdev=8132.04 00:14:02.646 lat (usec): min=3590, max=71486, avg=11941.74, stdev=8134.65 00:14:02.646 clat percentiles (usec): 00:14:02.646 | 1.00th=[ 4359], 5.00th=[ 5080], 10.00th=[ 5800], 20.00th=[ 6587], 00:14:02.646 | 30.00th=[ 7242], 40.00th=[ 8225], 50.00th=[ 9372], 60.00th=[10552], 00:14:02.646 | 70.00th=[12387], 80.00th=[15401], 90.00th=[21103], 95.00th=[27919], 00:14:02.646 | 99.00th=[44827], 99.50th=[50070], 99.90th=[71828], 99.95th=[71828], 00:14:02.646 | 99.99th=[71828] 00:14:02.646 write: IOPS=118, BW=14.8MiB/s (15.5MB/s)(127MiB/8610msec); 0 zone resets 00:14:02.646 slat (usec): min=41, max=4815, avg=167.92, stdev=269.11 00:14:02.646 clat (msec): min=16, max=238, avg=67.19, stdev=30.83 00:14:02.646 lat (msec): min=17, max=239, avg=67.35, stdev=30.85 00:14:02.646 clat percentiles (msec): 00:14:02.646 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 45], 00:14:02.646 | 30.00th=[ 50], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 64], 00:14:02.646 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 102], 95.00th=[ 117], 00:14:02.646 | 99.00th=[ 209], 99.50th=[ 228], 99.90th=[ 234], 99.95th=[ 239], 00:14:02.646 | 99.99th=[ 239] 00:14:02.646 bw ( KiB/s): min= 2043, max=23203, per=1.18%, avg=12915.30, stdev=5456.21, samples=20 00:14:02.646 iops : min= 15, max= 181, avg=100.65, stdev=42.74, samples=20 00:14:02.646 lat (msec) : 4=0.10%, 10=26.61%, 20=16.49%, 50=21.75%, 100=29.64% 00:14:02.646 lat (msec) : 250=5.41% 00:14:02.646 cpu : usr=0.97%, sys=0.43%, ctx=3322, majf=0, 
minf=5 00:14:02.646 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.646 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.646 issued rwts: total=960,1017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.646 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.646 job26: (groupid=0, jobs=1): err= 0: pid=70469: Thu Jul 25 08:56:09 2024 00:14:02.646 read: IOPS=107, BW=13.4MiB/s (14.1MB/s)(107MiB/7990msec) 00:14:02.646 slat (usec): min=4, max=2371, avg=55.74, stdev=130.26 00:14:02.646 clat (usec): min=2730, max=45235, avg=10832.82, stdev=7388.58 00:14:02.646 lat (usec): min=2781, max=45263, avg=10888.56, stdev=7401.37 00:14:02.646 clat percentiles (usec): 00:14:02.646 | 1.00th=[ 3720], 5.00th=[ 4424], 10.00th=[ 5014], 20.00th=[ 5866], 00:14:02.646 | 30.00th=[ 6521], 40.00th=[ 7177], 50.00th=[ 8029], 60.00th=[ 9503], 00:14:02.646 | 70.00th=[10683], 80.00th=[14746], 90.00th=[20841], 95.00th=[26870], 00:14:02.646 | 99.00th=[38536], 99.50th=[41681], 99.90th=[45351], 99.95th=[45351], 00:14:02.646 | 99.99th=[45351] 00:14:02.646 write: IOPS=108, BW=13.6MiB/s (14.3MB/s)(120MiB/8821msec); 0 zone resets 00:14:02.646 slat (usec): min=38, max=13549, avg=193.11, stdev=527.02 00:14:02.646 clat (msec): min=37, max=289, avg=72.67, stdev=33.55 00:14:02.646 lat (msec): min=37, max=289, avg=72.86, stdev=33.57 00:14:02.646 clat percentiles (msec): 00:14:02.646 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 48], 00:14:02.646 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 63], 60.00th=[ 72], 00:14:02.646 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 116], 95.00th=[ 138], 00:14:02.646 | 99.00th=[ 192], 99.50th=[ 241], 99.90th=[ 288], 99.95th=[ 288], 00:14:02.646 | 99.99th=[ 288] 00:14:02.646 bw ( KiB/s): min= 3078, max=21760, per=1.14%, avg=12442.16, stdev=5634.88, samples=19 00:14:02.646 iops : min= 24, max= 170, avg=97.05, 
stdev=43.94, samples=19 00:14:02.646 lat (msec) : 4=0.94%, 10=29.99%, 20=11.06%, 50=19.26%, 100=30.88% 00:14:02.646 lat (msec) : 250=7.65%, 500=0.22% 00:14:02.646 cpu : usr=0.87%, sys=0.41%, ctx=3178, majf=0, minf=1 00:14:02.646 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.646 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.646 issued rwts: total=857,960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.646 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.646 job27: (groupid=0, jobs=1): err= 0: pid=70470: Thu Jul 25 08:56:09 2024 00:14:02.646 read: IOPS=110, BW=13.8MiB/s (14.5MB/s)(120MiB/8668msec) 00:14:02.646 slat (usec): min=4, max=1988, avg=55.65, stdev=136.16 00:14:02.646 clat (usec): min=3795, max=44945, avg=11285.29, stdev=6080.60 00:14:02.646 lat (usec): min=3988, max=45246, avg=11340.93, stdev=6078.14 00:14:02.646 clat percentiles (usec): 00:14:02.646 | 1.00th=[ 4424], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6980], 00:14:02.646 | 30.00th=[ 7439], 40.00th=[ 8225], 50.00th=[ 9110], 60.00th=[10814], 00:14:02.646 | 70.00th=[12256], 80.00th=[15008], 90.00th=[19530], 95.00th=[24511], 00:14:02.646 | 99.00th=[31589], 99.50th=[38011], 99.90th=[44827], 99.95th=[44827], 00:14:02.646 | 99.99th=[44827] 00:14:02.646 write: IOPS=116, BW=14.6MiB/s (15.3MB/s)(127MiB/8668msec); 0 zone resets 00:14:02.646 slat (usec): min=39, max=4634, avg=164.59, stdev=324.48 00:14:02.646 clat (msec): min=35, max=340, avg=67.78, stdev=34.68 00:14:02.646 lat (msec): min=36, max=340, avg=67.95, stdev=34.70 00:14:02.646 clat percentiles (msec): 00:14:02.646 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 45], 00:14:02.646 | 30.00th=[ 50], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 62], 00:14:02.646 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 106], 95.00th=[ 123], 00:14:02.646 | 99.00th=[ 209], 99.50th=[ 262], 99.90th=[ 334], 
99.95th=[ 342], 00:14:02.646 | 99.99th=[ 342] 00:14:02.646 bw ( KiB/s): min= 768, max=22784, per=1.18%, avg=12869.75, stdev=5692.45, samples=20 00:14:02.646 iops : min= 6, max= 178, avg=100.35, stdev=44.52, samples=20 00:14:02.646 lat (msec) : 4=0.10%, 10=27.61%, 20=16.46%, 50=21.53%, 100=28.12% 00:14:02.646 lat (msec) : 250=5.88%, 500=0.30% 00:14:02.646 cpu : usr=0.98%, sys=0.39%, ctx=3322, majf=0, minf=1 00:14:02.646 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.646 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.646 issued rwts: total=960,1014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.646 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.646 job28: (groupid=0, jobs=1): err= 0: pid=70471: Thu Jul 25 08:56:09 2024 00:14:02.646 read: IOPS=105, BW=13.1MiB/s (13.8MB/s)(112MiB/8505msec) 00:14:02.646 slat (usec): min=5, max=2183, avg=64.39, stdev=127.68 00:14:02.646 clat (usec): min=3660, max=81138, avg=13112.42, stdev=9395.53 00:14:02.646 lat (usec): min=3672, max=81185, avg=13176.82, stdev=9413.93 00:14:02.646 clat percentiles (usec): 00:14:02.646 | 1.00th=[ 4146], 5.00th=[ 5342], 10.00th=[ 5735], 20.00th=[ 6390], 00:14:02.646 | 30.00th=[ 7701], 40.00th=[ 8586], 50.00th=[10421], 60.00th=[12780], 00:14:02.646 | 70.00th=[14484], 80.00th=[16909], 90.00th=[24773], 95.00th=[29754], 00:14:02.646 | 99.00th=[49546], 99.50th=[69731], 99.90th=[81265], 99.95th=[81265], 00:14:02.646 | 99.99th=[81265] 00:14:02.646 write: IOPS=112, BW=14.1MiB/s (14.8MB/s)(120MiB/8518msec); 0 zone resets 00:14:02.646 slat (usec): min=38, max=20899, avg=220.19, stdev=758.95 00:14:02.646 clat (msec): min=33, max=298, avg=70.07, stdev=37.78 00:14:02.646 lat (msec): min=35, max=298, avg=70.29, stdev=37.76 00:14:02.646 clat percentiles (msec): 00:14:02.646 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 47], 00:14:02.646 | 
30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 64], 00:14:02.646 | 70.00th=[ 71], 80.00th=[ 82], 90.00th=[ 108], 95.00th=[ 157], 00:14:02.646 | 99.00th=[ 230], 99.50th=[ 243], 99.90th=[ 300], 99.95th=[ 300], 00:14:02.646 | 99.99th=[ 300] 00:14:02.646 bw ( KiB/s): min= 1795, max=22272, per=1.15%, avg=12554.89, stdev=5910.09, samples=19 00:14:02.646 iops : min= 14, max= 174, avg=98.00, stdev=46.26, samples=19 00:14:02.646 lat (msec) : 4=0.38%, 10=22.92%, 20=18.18%, 50=21.20%, 100=31.45% 00:14:02.646 lat (msec) : 250=5.72%, 500=0.16% 00:14:02.647 cpu : usr=0.92%, sys=0.47%, ctx=3234, majf=0, minf=3 00:14:02.647 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.647 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.647 issued rwts: total=894,960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.647 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.647 job29: (groupid=0, jobs=1): err= 0: pid=70472: Thu Jul 25 08:56:09 2024 00:14:02.647 read: IOPS=95, BW=12.0MiB/s (12.6MB/s)(100MiB/8350msec) 00:14:02.647 slat (usec): min=4, max=6678, avg=78.40, stdev=287.94 00:14:02.647 clat (msec): min=2, max=294, avg=19.66, stdev=30.29 00:14:02.647 lat (msec): min=2, max=295, avg=19.73, stdev=30.29 00:14:02.647 clat percentiles (msec): 00:14:02.647 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:14:02.647 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 12], 60.00th=[ 15], 00:14:02.647 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 37], 95.00th=[ 67], 00:14:02.647 | 99.00th=[ 133], 99.50th=[ 271], 99.90th=[ 296], 99.95th=[ 296], 00:14:02.647 | 99.99th=[ 296] 00:14:02.647 write: IOPS=110, BW=13.8MiB/s (14.5MB/s)(112MiB/8057msec); 0 zone resets 00:14:02.647 slat (usec): min=30, max=4326, avg=160.89, stdev=257.42 00:14:02.647 clat (msec): min=35, max=277, avg=71.58, stdev=30.73 00:14:02.647 lat (msec): min=35, max=277, 
avg=71.74, stdev=30.73 00:14:02.647 clat percentiles (msec): 00:14:02.647 | 1.00th=[ 39], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 50], 00:14:02.647 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 71], 00:14:02.647 | 70.00th=[ 80], 80.00th=[ 89], 90.00th=[ 108], 95.00th=[ 122], 00:14:02.647 | 99.00th=[ 207], 99.50th=[ 266], 99.90th=[ 279], 99.95th=[ 279], 00:14:02.647 | 99.99th=[ 279] 00:14:02.647 bw ( KiB/s): min= 512, max=21504, per=1.04%, avg=11320.85, stdev=5772.11, samples=20 00:14:02.647 iops : min= 4, max= 168, avg=88.25, stdev=45.08, samples=20 00:14:02.647 lat (msec) : 4=1.30%, 10=18.79%, 20=17.73%, 50=17.61%, 100=36.52% 00:14:02.647 lat (msec) : 250=7.39%, 500=0.65% 00:14:02.647 cpu : usr=0.74%, sys=0.41%, ctx=2933, majf=0, minf=5 00:14:02.647 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.647 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.647 issued rwts: total=800,892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.647 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.647 job30: (groupid=0, jobs=1): err= 0: pid=70473: Thu Jul 25 08:56:09 2024 00:14:02.647 read: IOPS=76, BW=9787KiB/s (10.0MB/s)(80.0MiB/8370msec) 00:14:02.647 slat (usec): min=5, max=1350, avg=69.82, stdev=133.71 00:14:02.647 clat (msec): min=5, max=127, avg=17.43, stdev=13.60 00:14:02.647 lat (msec): min=6, max=127, avg=17.50, stdev=13.59 00:14:02.647 clat percentiles (msec): 00:14:02.647 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:14:02.647 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 16], 00:14:02.647 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 30], 95.00th=[ 41], 00:14:02.647 | 99.00th=[ 78], 99.50th=[ 102], 99.90th=[ 128], 99.95th=[ 128], 00:14:02.647 | 99.99th=[ 128] 00:14:02.647 write: IOPS=82, BW=10.3MiB/s (10.8MB/s)(89.0MiB/8639msec); 0 zone resets 00:14:02.647 slat (usec): min=41, 
max=7743, avg=167.74, stdev=403.55 00:14:02.647 clat (msec): min=56, max=347, avg=96.39, stdev=45.95 00:14:02.647 lat (msec): min=56, max=347, avg=96.56, stdev=46.00 00:14:02.647 clat percentiles (msec): 00:14:02.647 | 1.00th=[ 58], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 68], 00:14:02.647 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 80], 60.00th=[ 85], 00:14:02.647 | 70.00th=[ 97], 80.00th=[ 122], 90.00th=[ 153], 95.00th=[ 186], 00:14:02.647 | 99.00th=[ 292], 99.50th=[ 317], 99.90th=[ 347], 99.95th=[ 347], 00:14:02.647 | 99.99th=[ 347] 00:14:02.647 bw ( KiB/s): min= 1784, max=15872, per=0.83%, avg=9010.25, stdev=4717.07, samples=20 00:14:02.647 iops : min= 13, max= 124, avg=70.15, stdev=37.13, samples=20 00:14:02.647 lat (msec) : 10=10.87%, 20=27.66%, 50=6.95%, 100=39.64%, 250=13.68% 00:14:02.647 lat (msec) : 500=1.18% 00:14:02.647 cpu : usr=0.59%, sys=0.31%, ctx=2318, majf=0, minf=3 00:14:02.647 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.647 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.647 issued rwts: total=640,712,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.647 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.647 job31: (groupid=0, jobs=1): err= 0: pid=70474: Thu Jul 25 08:56:09 2024 00:14:02.647 read: IOPS=65, BW=8339KiB/s (8539kB/s)(60.0MiB/7368msec) 00:14:02.647 slat (usec): min=4, max=3447, avg=74.77, stdev=239.65 00:14:02.647 clat (msec): min=4, max=260, avg=20.67, stdev=31.54 00:14:02.647 lat (msec): min=4, max=260, avg=20.75, stdev=31.62 00:14:02.647 clat percentiles (msec): 00:14:02.647 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:14:02.647 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 16], 00:14:02.647 | 70.00th=[ 18], 80.00th=[ 23], 90.00th=[ 35], 95.00th=[ 49], 00:14:02.647 | 99.00th=[ 249], 99.50th=[ 257], 99.90th=[ 262], 99.95th=[ 262], 00:14:02.647 | 
99.99th=[ 262] 00:14:02.647 write: IOPS=68, BW=8732KiB/s (8941kB/s)(75.1MiB/8810msec); 0 zone resets 00:14:02.647 slat (usec): min=30, max=5329, avg=199.55, stdev=365.08 00:14:02.647 clat (msec): min=58, max=362, avg=116.32, stdev=47.70 00:14:02.647 lat (msec): min=59, max=362, avg=116.52, stdev=47.71 00:14:02.647 clat percentiles (msec): 00:14:02.647 | 1.00th=[ 61], 5.00th=[ 66], 10.00th=[ 71], 20.00th=[ 79], 00:14:02.647 | 30.00th=[ 85], 40.00th=[ 93], 50.00th=[ 102], 60.00th=[ 114], 00:14:02.647 | 70.00th=[ 131], 80.00th=[ 150], 90.00th=[ 182], 95.00th=[ 213], 00:14:02.647 | 99.00th=[ 271], 99.50th=[ 305], 99.90th=[ 363], 99.95th=[ 363], 00:14:02.647 | 99.99th=[ 363] 00:14:02.647 bw ( KiB/s): min= 2048, max=13568, per=0.70%, avg=7598.75, stdev=3265.72, samples=20 00:14:02.647 iops : min= 16, max= 106, avg=59.15, stdev=25.56, samples=20 00:14:02.647 lat (msec) : 10=16.74%, 20=16.84%, 50=8.60%, 100=27.94%, 250=28.49% 00:14:02.647 lat (msec) : 500=1.39% 00:14:02.647 cpu : usr=0.48%, sys=0.21%, ctx=2035, majf=0, minf=3 00:14:02.647 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.647 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.647 issued rwts: total=480,601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.647 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.647 job32: (groupid=0, jobs=1): err= 0: pid=70475: Thu Jul 25 08:56:09 2024 00:14:02.647 read: IOPS=76, BW=9840KiB/s (10.1MB/s)(80.0MiB/8325msec) 00:14:02.647 slat (usec): min=4, max=2500, avg=59.49, stdev=163.93 00:14:02.647 clat (usec): min=7188, max=93864, avg=20677.40, stdev=12423.82 00:14:02.647 lat (usec): min=7276, max=93883, avg=20736.89, stdev=12431.46 00:14:02.647 clat percentiles (usec): 00:14:02.647 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[11731], 00:14:02.647 | 30.00th=[13435], 40.00th=[15139], 50.00th=[16450], 
60.00th=[19006], 00:14:02.647 | 70.00th=[22938], 80.00th=[27657], 90.00th=[36963], 95.00th=[45876], 00:14:02.647 | 99.00th=[64226], 99.50th=[84411], 99.90th=[93848], 99.95th=[93848], 00:14:02.647 | 99.99th=[93848] 00:14:02.647 write: IOPS=82, BW=10.3MiB/s (10.8MB/s)(86.4MiB/8402msec); 0 zone resets 00:14:02.647 slat (usec): min=36, max=15571, avg=198.30, stdev=669.03 00:14:02.647 clat (msec): min=16, max=371, avg=96.27, stdev=38.02 00:14:02.647 lat (msec): min=20, max=371, avg=96.47, stdev=37.97 00:14:02.647 clat percentiles (msec): 00:14:02.647 | 1.00th=[ 29], 5.00th=[ 60], 10.00th=[ 64], 20.00th=[ 71], 00:14:02.647 | 30.00th=[ 75], 40.00th=[ 80], 50.00th=[ 86], 60.00th=[ 94], 00:14:02.647 | 70.00th=[ 102], 80.00th=[ 115], 90.00th=[ 148], 95.00th=[ 174], 00:14:02.647 | 99.00th=[ 228], 99.50th=[ 257], 99.90th=[ 372], 99.95th=[ 372], 00:14:02.647 | 99.99th=[ 372] 00:14:02.647 bw ( KiB/s): min= 1021, max=16128, per=0.80%, avg=8741.75, stdev=4661.26, samples=20 00:14:02.647 iops : min= 7, max= 126, avg=68.15, stdev=36.51, samples=20 00:14:02.647 lat (msec) : 10=5.48%, 20=25.02%, 50=16.60%, 100=36.14%, 250=16.45% 00:14:02.647 lat (msec) : 500=0.30% 00:14:02.647 cpu : usr=0.72%, sys=0.28%, ctx=2279, majf=0, minf=5 00:14:02.647 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.647 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.647 issued rwts: total=640,691,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.647 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.647 job33: (groupid=0, jobs=1): err= 0: pid=70476: Thu Jul 25 08:56:09 2024 00:14:02.647 read: IOPS=72, BW=9333KiB/s (9557kB/s)(80.0MiB/8777msec) 00:14:02.647 slat (usec): min=4, max=1993, avg=76.04, stdev=182.64 00:14:02.647 clat (usec): min=3486, max=77005, avg=13017.14, stdev=11910.20 00:14:02.647 lat (usec): min=3536, max=77015, avg=13093.18, 
stdev=11910.88 00:14:02.647 clat percentiles (usec): 00:14:02.647 | 1.00th=[ 5014], 5.00th=[ 5669], 10.00th=[ 6063], 20.00th=[ 6587], 00:14:02.647 | 30.00th=[ 7373], 40.00th=[ 8225], 50.00th=[ 9241], 60.00th=[10421], 00:14:02.647 | 70.00th=[11731], 80.00th=[14091], 90.00th=[22676], 95.00th=[40633], 00:14:02.647 | 99.00th=[70779], 99.50th=[74974], 99.90th=[77071], 99.95th=[77071], 00:14:02.647 | 99.99th=[77071] 00:14:02.648 write: IOPS=79, BW=9.96MiB/s (10.4MB/s)(90.0MiB/9031msec); 0 zone resets 00:14:02.648 slat (usec): min=36, max=13944, avg=214.24, stdev=673.38 00:14:02.648 clat (usec): min=1761, max=333557, avg=99579.25, stdev=51087.02 00:14:02.648 lat (usec): min=1875, max=333618, avg=99793.49, stdev=51074.56 00:14:02.648 clat percentiles (msec): 00:14:02.648 | 1.00th=[ 3], 5.00th=[ 22], 10.00th=[ 59], 20.00th=[ 65], 00:14:02.648 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 87], 60.00th=[ 97], 00:14:02.648 | 70.00th=[ 115], 80.00th=[ 136], 90.00th=[ 165], 95.00th=[ 199], 00:14:02.648 | 99.00th=[ 255], 99.50th=[ 296], 99.90th=[ 334], 99.95th=[ 334], 00:14:02.648 | 99.99th=[ 334] 00:14:02.648 bw ( KiB/s): min= 1024, max=21504, per=0.84%, avg=9118.10, stdev=4814.38, samples=20 00:14:02.648 iops : min= 8, max= 168, avg=70.95, stdev=37.65, samples=20 00:14:02.648 lat (msec) : 2=0.07%, 4=1.69%, 10=25.96%, 20=15.96%, 50=5.22% 00:14:02.648 lat (msec) : 100=31.40%, 250=19.12%, 500=0.59% 00:14:02.648 cpu : usr=0.81%, sys=0.21%, ctx=2242, majf=0, minf=5 00:14:02.648 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.648 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.648 issued rwts: total=640,720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.648 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.648 job34: (groupid=0, jobs=1): err= 0: pid=70477: Thu Jul 25 08:56:09 2024 00:14:02.648 read: IOPS=82, BW=10.3MiB/s 
(10.8MB/s)(80.0MiB/7778msec) 00:14:02.648 slat (usec): min=4, max=710, avg=62.37, stdev=95.95 00:14:02.648 clat (usec): min=6626, max=65920, avg=17301.41, stdev=9162.20 00:14:02.648 lat (usec): min=6749, max=65935, avg=17363.79, stdev=9155.44 00:14:02.648 clat percentiles (usec): 00:14:02.648 | 1.00th=[ 7111], 5.00th=[ 8160], 10.00th=[ 9372], 20.00th=[11207], 00:14:02.648 | 30.00th=[11994], 40.00th=[13566], 50.00th=[14484], 60.00th=[15795], 00:14:02.648 | 70.00th=[18220], 80.00th=[23462], 90.00th=[27657], 95.00th=[36439], 00:14:02.648 | 99.00th=[60031], 99.50th=[61080], 99.90th=[65799], 99.95th=[65799], 00:14:02.648 | 99.99th=[65799] 00:14:02.648 write: IOPS=75, BW=9661KiB/s (9893kB/s)(81.8MiB/8665msec); 0 zone resets 00:14:02.648 slat (usec): min=40, max=9424, avg=220.29, stdev=521.90 00:14:02.648 clat (msec): min=52, max=427, avg=104.59, stdev=52.72 00:14:02.648 lat (msec): min=52, max=427, avg=104.81, stdev=52.74 00:14:02.648 clat percentiles (msec): 00:14:02.648 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 69], 00:14:02.648 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 84], 60.00th=[ 91], 00:14:02.648 | 70.00th=[ 112], 80.00th=[ 138], 90.00th=[ 171], 95.00th=[ 222], 00:14:02.648 | 99.00th=[ 296], 99.50th=[ 342], 99.90th=[ 426], 99.95th=[ 426], 00:14:02.648 | 99.99th=[ 426] 00:14:02.648 bw ( KiB/s): min= 1792, max=14818, per=0.76%, avg=8280.00, stdev=4550.69, samples=20 00:14:02.648 iops : min= 14, max= 115, avg=64.60, stdev=35.46, samples=20 00:14:02.648 lat (msec) : 10=6.18%, 20=30.53%, 50=12.06%, 100=33.31%, 250=16.54% 00:14:02.648 lat (msec) : 500=1.39% 00:14:02.648 cpu : usr=0.79%, sys=0.22%, ctx=2220, majf=0, minf=7 00:14:02.648 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.648 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.648 issued rwts: total=640,654,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:14:02.648 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.648 job35: (groupid=0, jobs=1): err= 0: pid=70478: Thu Jul 25 08:56:09 2024 00:14:02.648 read: IOPS=78, BW=9.77MiB/s (10.2MB/s)(80.0MiB/8188msec) 00:14:02.648 slat (usec): min=4, max=2858, avg=69.29, stdev=178.25 00:14:02.648 clat (msec): min=7, max=149, avg=25.81, stdev=21.05 00:14:02.648 lat (msec): min=7, max=149, avg=25.88, stdev=21.06 00:14:02.648 clat percentiles (msec): 00:14:02.648 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 14], 00:14:02.648 | 30.00th=[ 15], 40.00th=[ 18], 50.00th=[ 20], 60.00th=[ 22], 00:14:02.648 | 70.00th=[ 26], 80.00th=[ 32], 90.00th=[ 42], 95.00th=[ 67], 00:14:02.648 | 99.00th=[ 126], 99.50th=[ 142], 99.90th=[ 150], 99.95th=[ 150], 00:14:02.648 | 99.99th=[ 150] 00:14:02.648 write: IOPS=83, BW=10.4MiB/s (10.9MB/s)(82.9MiB/7975msec); 0 zone resets 00:14:02.648 slat (usec): min=41, max=1812, avg=162.01, stdev=202.64 00:14:02.648 clat (msec): min=56, max=386, avg=95.16, stdev=43.74 00:14:02.648 lat (msec): min=57, max=386, avg=95.32, stdev=43.75 00:14:02.648 clat percentiles (msec): 00:14:02.648 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 68], 00:14:02.648 | 30.00th=[ 74], 40.00th=[ 78], 50.00th=[ 82], 60.00th=[ 87], 00:14:02.648 | 70.00th=[ 97], 80.00th=[ 110], 90.00th=[ 142], 95.00th=[ 188], 00:14:02.648 | 99.00th=[ 284], 99.50th=[ 347], 99.90th=[ 388], 99.95th=[ 388], 00:14:02.648 | 99.99th=[ 388] 00:14:02.648 bw ( KiB/s): min= 2304, max=16128, per=0.81%, avg=8821.79, stdev=4563.82, samples=19 00:14:02.648 iops : min= 18, max= 126, avg=68.79, stdev=35.63, samples=19 00:14:02.648 lat (msec) : 10=2.23%, 20=23.18%, 50=19.42%, 100=39.60%, 250=14.50% 00:14:02.648 lat (msec) : 500=1.07% 00:14:02.648 cpu : usr=0.70%, sys=0.27%, ctx=2241, majf=0, minf=5 00:14:02.648 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.648 complete : 
0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.648 issued rwts: total=640,663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.648 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.648 job36: (groupid=0, jobs=1): err= 0: pid=70479: Thu Jul 25 08:56:09 2024 00:14:02.648 read: IOPS=78, BW=9.78MiB/s (10.3MB/s)(80.0MiB/8178msec) 00:14:02.648 slat (usec): min=4, max=2436, avg=66.67, stdev=170.84 00:14:02.648 clat (usec): min=5502, max=64017, avg=18943.61, stdev=11474.09 00:14:02.648 lat (usec): min=5625, max=64025, avg=19010.28, stdev=11472.60 00:14:02.648 clat percentiles (usec): 00:14:02.648 | 1.00th=[ 5932], 5.00th=[ 7177], 10.00th=[ 8717], 20.00th=[10028], 00:14:02.648 | 30.00th=[11863], 40.00th=[13566], 50.00th=[15139], 60.00th=[16909], 00:14:02.648 | 70.00th=[21103], 80.00th=[25822], 90.00th=[37487], 95.00th=[45351], 00:14:02.648 | 99.00th=[56886], 99.50th=[57410], 99.90th=[64226], 99.95th=[64226], 00:14:02.648 | 99.99th=[64226] 00:14:02.648 write: IOPS=77, BW=9957KiB/s (10.2MB/s)(82.9MiB/8523msec); 0 zone resets 00:14:02.648 slat (usec): min=38, max=2879, avg=177.75, stdev=270.81 00:14:02.648 clat (msec): min=41, max=405, avg=101.70, stdev=49.31 00:14:02.648 lat (msec): min=41, max=405, avg=101.87, stdev=49.33 00:14:02.648 clat percentiles (msec): 00:14:02.648 | 1.00th=[ 51], 5.00th=[ 60], 10.00th=[ 64], 20.00th=[ 71], 00:14:02.648 | 30.00th=[ 74], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 93], 00:14:02.648 | 70.00th=[ 107], 80.00th=[ 123], 90.00th=[ 155], 95.00th=[ 209], 00:14:02.648 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 405], 99.95th=[ 405], 00:14:02.648 | 99.99th=[ 405] 00:14:02.648 bw ( KiB/s): min= 1792, max=16384, per=0.81%, avg=8824.00, stdev=4181.39, samples=19 00:14:02.648 iops : min= 14, max= 128, avg=68.84, stdev=32.60, samples=19 00:14:02.648 lat (msec) : 10=9.67%, 20=22.79%, 50=15.73%, 100=33.84%, 250=16.58% 00:14:02.648 lat (msec) : 500=1.38% 00:14:02.648 cpu : usr=0.55%, sys=0.32%, ctx=2303, majf=0, minf=5 
00:14:02.648 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.648 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.648 issued rwts: total=640,663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.648 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.648 job37: (groupid=0, jobs=1): err= 0: pid=70480: Thu Jul 25 08:56:09 2024 00:14:02.648 read: IOPS=74, BW=9552KiB/s (9781kB/s)(80.0MiB/8576msec) 00:14:02.648 slat (usec): min=4, max=1748, avg=51.05, stdev=135.15 00:14:02.648 clat (usec): min=4958, max=73279, avg=11560.66, stdev=8753.25 00:14:02.648 lat (usec): min=5451, max=73295, avg=11611.72, stdev=8756.36 00:14:02.648 clat percentiles (usec): 00:14:02.648 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6259], 20.00th=[ 6849], 00:14:02.648 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8848], 60.00th=[ 9765], 00:14:02.648 | 70.00th=[11076], 80.00th=[13435], 90.00th=[17695], 95.00th=[27395], 00:14:02.648 | 99.00th=[56886], 99.50th=[63177], 99.90th=[72877], 99.95th=[72877], 00:14:02.648 | 99.99th=[72877] 00:14:02.648 write: IOPS=78, BW=9.83MiB/s (10.3MB/s)(90.0MiB/9155msec); 0 zone resets 00:14:02.648 slat (usec): min=35, max=3120, avg=176.50, stdev=299.36 00:14:02.648 clat (usec): min=1864, max=376121, avg=101033.22, stdev=51229.60 00:14:02.648 lat (usec): min=1921, max=376183, avg=101209.72, stdev=51235.97 00:14:02.648 clat percentiles (msec): 00:14:02.648 | 1.00th=[ 8], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 68], 00:14:02.648 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 94], 00:14:02.648 | 70.00th=[ 112], 80.00th=[ 136], 90.00th=[ 165], 95.00th=[ 190], 00:14:02.648 | 99.00th=[ 296], 99.50th=[ 317], 99.90th=[ 376], 99.95th=[ 376], 00:14:02.648 | 99.99th=[ 376] 00:14:02.648 bw ( KiB/s): min= 2304, max=16384, per=0.83%, avg=9105.35, stdev=4470.08, samples=20 00:14:02.648 iops : min= 18, max= 128, 
avg=70.95, stdev=34.89, samples=20
00:14:02.648 lat (msec) : 2=0.07%, 4=0.15%, 10=30.15%, 20=13.75%, 50=4.19%
00:14:02.648 lat (msec) : 100=33.09%, 250=17.50%, 500=1.10%
00:14:02.648 cpu : usr=0.75%, sys=0.26%, ctx=2141, majf=0, minf=5
00:14:02.648 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.648 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.648 issued rwts: total=640,720,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.648 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.648 job38: (groupid=0, jobs=1): err= 0: pid=70481: Thu Jul 25 08:56:09 2024
00:14:02.648 read: IOPS=67, BW=8656KiB/s (8864kB/s)(60.0MiB/7098msec)
00:14:02.648 slat (usec): min=4, max=3263, avg=88.63, stdev=305.77
00:14:02.648 clat (msec): min=4, max=357, avg=23.64, stdev=44.62
00:14:02.648 lat (msec): min=4, max=357, avg=23.73, stdev=44.64
00:14:02.648 clat percentiles (msec):
00:14:02.649 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8],
00:14:02.649 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 12], 60.00th=[ 13],
00:14:02.649 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 39], 95.00th=[ 102],
00:14:02.649 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 359], 99.95th=[ 359],
00:14:02.649 | 99.99th=[ 359]
00:14:02.649 write: IOPS=68, BW=8764KiB/s (8974kB/s)(73.8MiB/8617msec); 0 zone resets
00:14:02.649 slat (usec): min=32, max=9810, avg=208.67, stdev=506.72
00:14:02.649 clat (msec): min=57, max=298, avg=116.14, stdev=47.18
00:14:02.649 lat (msec): min=57, max=298, avg=116.35, stdev=47.24
00:14:02.649 clat percentiles (msec):
00:14:02.649 | 1.00th=[ 59], 5.00th=[ 63], 10.00th=[ 68], 20.00th=[ 75],
00:14:02.649 | 30.00th=[ 83], 40.00th=[ 93], 50.00th=[ 106], 60.00th=[ 121],
00:14:02.649 | 70.00th=[ 133], 80.00th=[ 150], 90.00th=[ 178], 95.00th=[ 213],
00:14:02.649 | 99.00th=[ 271], 99.50th=[ 288], 99.90th=[ 300], 99.95th=[ 300],
00:14:02.649 | 99.99th=[ 300]
00:14:02.649 bw ( KiB/s): min= 4096, max=14818, per=0.75%, avg=8155.71, stdev=3001.09, samples=17
00:14:02.649 iops : min= 32, max= 115, avg=63.47, stdev=23.44, samples=17
00:14:02.649 lat (msec) : 10=18.04%, 20=18.69%, 50=4.67%, 100=25.79%, 250=31.50%
00:14:02.649 lat (msec) : 500=1.31%
00:14:02.649 cpu : usr=0.67%, sys=0.17%, ctx=1836, majf=0, minf=15
00:14:02.649 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.649 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.649 issued rwts: total=480,590,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.649 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.649 job39: (groupid=0, jobs=1): err= 0: pid=70482: Thu Jul 25 08:56:09 2024
00:14:02.649 read: IOPS=63, BW=8085KiB/s (8279kB/s)(63.9MiB/8090msec)
00:14:02.649 slat (usec): min=4, max=6749, avg=109.77, stdev=404.77
00:14:02.649 clat (msec): min=4, max=128, avg=21.68, stdev=18.60
00:14:02.649 lat (msec): min=4, max=128, avg=21.79, stdev=18.62
00:14:02.649 clat percentiles (msec):
00:14:02.649 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 12],
00:14:02.649 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 17], 60.00th=[ 21],
00:14:02.649 | 70.00th=[ 23], 80.00th=[ 28], 90.00th=[ 36], 95.00th=[ 48],
00:14:02.649 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 129], 99.95th=[ 129],
00:14:02.649 | 99.99th=[ 129]
00:14:02.649 write: IOPS=74, BW=9500KiB/s (9728kB/s)(80.0MiB/8623msec); 0 zone resets
00:14:02.649 slat (usec): min=29, max=2490, avg=172.87, stdev=213.74
00:14:02.649 clat (msec): min=34, max=301, avg=106.68, stdev=43.56
00:14:02.649 lat (msec): min=34, max=301, avg=106.86, stdev=43.56
00:14:02.649 clat percentiles (msec):
00:14:02.649 | 1.00th=[ 41], 5.00th=[ 64], 10.00th=[ 70], 20.00th=[ 77],
00:14:02.649 | 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 94], 60.00th=[ 102],
00:14:02.649 | 70.00th=[ 116], 80.00th=[ 132], 90.00th=[ 157], 95.00th=[ 194],
00:14:02.649 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 300], 99.95th=[ 300],
00:14:02.649 | 99.99th=[ 300]
00:14:02.649 bw ( KiB/s): min= 1792, max=13796, per=0.79%, avg=8614.21, stdev=3387.63, samples=19
00:14:02.649 iops : min= 14, max= 107, avg=67.05, stdev=26.42, samples=19
00:14:02.649 lat (msec) : 10=6.86%, 20=19.64%, 50=16.68%, 100=32.41%, 250=23.20%
00:14:02.649 lat (msec) : 500=1.22%
00:14:02.649 cpu : usr=0.62%, sys=0.24%, ctx=2057, majf=0, minf=1
00:14:02.649 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.649 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.649 issued rwts: total=511,640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.649 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.649 job40: (groupid=0, jobs=1): err= 0: pid=70483: Thu Jul 25 08:56:09 2024
00:14:02.649 read: IOPS=66, BW=8521KiB/s (8726kB/s)(60.0MiB/7210msec)
00:14:02.649 slat (usec): min=4, max=763, avg=50.67, stdev=94.80
00:14:02.649 clat (usec): min=3656, max=93357, avg=16006.32, stdev=16550.68
00:14:02.649 lat (usec): min=3735, max=93367, avg=16057.00, stdev=16545.18
00:14:02.649 clat percentiles (usec):
00:14:02.649 | 1.00th=[ 5407], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 8455],
00:14:02.649 | 30.00th=[ 9503], 40.00th=[10945], 50.00th=[12256], 60.00th=[12649],
00:14:02.649 | 70.00th=[14353], 80.00th=[16581], 90.00th=[21365], 95.00th=[30540],
00:14:02.649 | 99.00th=[92799], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848],
00:14:02.649 | 99.99th=[93848]
00:14:02.649 write: IOPS=66, BW=8546KiB/s (8751kB/s)(75.9MiB/9092msec); 0 zone resets
00:14:02.649 slat (usec): min=42, max=6609, avg=225.82, stdev=483.72
00:14:02.649 clat (msec): min=53, max=533, avg=119.09, stdev=53.70
00:14:02.649 lat (msec): min=53, max=533, avg=119.31, stdev=53.70
00:14:02.649 clat percentiles (msec):
00:14:02.649 | 1.00th=[ 57], 5.00th=[ 62], 10.00th=[ 68], 20.00th=[ 78],
00:14:02.649 | 30.00th=[ 88], 40.00th=[ 99], 50.00th=[ 111], 60.00th=[ 125],
00:14:02.649 | 70.00th=[ 136], 80.00th=[ 148], 90.00th=[ 178], 95.00th=[ 205],
00:14:02.649 | 99.00th=[ 317], 99.50th=[ 418], 99.90th=[ 535], 99.95th=[ 535],
00:14:02.649 | 99.99th=[ 535]
00:14:02.649 bw ( KiB/s): min= 768, max=14848, per=0.70%, avg=7662.70, stdev=3256.03, samples=20
00:14:02.649 iops : min= 6, max= 116, avg=59.65, stdev=25.47, samples=20
00:14:02.649 lat (msec) : 4=0.09%, 10=14.54%, 20=23.37%, 50=3.96%, 100=25.30%
00:14:02.649 lat (msec) : 250=32.01%, 500=0.55%, 750=0.18%
00:14:02.649 cpu : usr=0.48%, sys=0.29%, ctx=1965, majf=0, minf=7
00:14:02.649 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.649 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.649 issued rwts: total=480,607,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.649 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.649 job41: (groupid=0, jobs=1): err= 0: pid=70484: Thu Jul 25 08:56:09 2024
00:14:02.649 read: IOPS=65, BW=8331KiB/s (8531kB/s)(60.0MiB/7375msec)
00:14:02.649 slat (usec): min=5, max=1430, avg=61.72, stdev=134.49
00:14:02.649 clat (msec): min=3, max=130, avg=16.98, stdev=21.03
00:14:02.649 lat (msec): min=4, max=131, avg=17.04, stdev=21.03
00:14:02.649 clat percentiles (msec):
00:14:02.649 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8],
00:14:02.649 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 12],
00:14:02.649 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 28], 95.00th=[ 72],
00:14:02.649 | 99.00th=[ 115], 99.50th=[ 124], 99.90th=[ 131], 99.95th=[ 131],
00:14:02.649 | 99.99th=[ 131]
00:14:02.649 write: IOPS=67, BW=8594KiB/s (8800kB/s)(75.8MiB/9026msec); 0 zone resets
00:14:02.649 slat (usec): min=41, max=3115, avg=191.37, stdev=254.76
00:14:02.649 clat (msec): min=55, max=336, avg=118.36, stdev=47.41
00:14:02.649 lat (msec): min=56, max=336, avg=118.55, stdev=47.41
00:14:02.649 clat percentiles (msec):
00:14:02.649 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 67], 20.00th=[ 78],
00:14:02.649 | 30.00th=[ 88], 40.00th=[ 96], 50.00th=[ 110], 60.00th=[ 124],
00:14:02.649 | 70.00th=[ 138], 80.00th=[ 150], 90.00th=[ 188], 95.00th=[ 215],
00:14:02.649 | 99.00th=[ 262], 99.50th=[ 264], 99.90th=[ 338], 99.95th=[ 338],
00:14:02.649 | 99.99th=[ 338]
00:14:02.649 bw ( KiB/s): min= 2048, max=12800, per=0.70%, avg=7666.10, stdev=2923.83, samples=20
00:14:02.649 iops : min= 16, max= 100, avg=59.75, stdev=22.86, samples=20
00:14:02.649 lat (msec) : 4=0.18%, 10=22.38%, 20=14.18%, 50=4.33%, 100=26.24%
00:14:02.649 lat (msec) : 250=31.86%, 500=0.83%
00:14:02.649 cpu : usr=0.54%, sys=0.27%, ctx=1983, majf=0, minf=7
00:14:02.649 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.649 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.649 issued rwts: total=480,606,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.649 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.649 job42: (groupid=0, jobs=1): err= 0: pid=70485: Thu Jul 25 08:56:09 2024
00:14:02.649 read: IOPS=74, BW=9594KiB/s (9824kB/s)(80.0MiB/8539msec)
00:14:02.649 slat (usec): min=4, max=2907, avg=58.94, stdev=176.32
00:14:02.649 clat (usec): min=3873, max=89641, avg=13307.71, stdev=8930.89
00:14:02.649 lat (usec): min=3892, max=89677, avg=13366.65, stdev=8930.75
00:14:02.649 clat percentiles (usec):
00:14:02.649 | 1.00th=[ 6325], 5.00th=[ 7111], 10.00th=[ 7635], 20.00th=[ 8586],
00:14:02.649 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[11338], 60.00th=[12518],
00:14:02.649 | 70.00th=[13566], 80.00th=[15533], 90.00th=[18482], 95.00th=[26608],
00:14:02.649 | 99.00th=[64226], 99.50th=[78119], 99.90th=[89654], 99.95th=[89654],
00:14:02.649 | 99.99th=[89654]
00:14:02.649 write: IOPS=84, BW=10.5MiB/s (11.0MB/s)(94.8MiB/8996msec); 0 zone resets
00:14:02.649 slat (usec): min=31, max=14976, avg=186.09, stdev=641.23
00:14:02.649 clat (msec): min=10, max=367, avg=94.10, stdev=44.36
00:14:02.649 lat (msec): min=10, max=367, avg=94.29, stdev=44.32
00:14:02.649 clat percentiles (msec):
00:14:02.649 | 1.00th=[ 11], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 65],
00:14:02.649 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 87],
00:14:02.649 | 70.00th=[ 105], 80.00th=[ 127], 90.00th=[ 146], 95.00th=[ 180],
00:14:02.649 | 99.00th=[ 251], 99.50th=[ 279], 99.90th=[ 368], 99.95th=[ 368],
00:14:02.649 | 99.99th=[ 368]
00:14:02.649 bw ( KiB/s): min= 1795, max=16384, per=0.88%, avg=9607.00, stdev=4311.98, samples=20
00:14:02.649 iops : min= 14, max= 128, avg=74.95, stdev=33.66, samples=20
00:14:02.649 lat (msec) : 4=0.07%, 10=17.81%, 20=25.39%, 50=3.43%, 100=35.91%
00:14:02.649 lat (msec) : 250=16.88%, 500=0.50%
00:14:02.649 cpu : usr=0.67%, sys=0.23%, ctx=2390, majf=0, minf=5
00:14:02.649 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.649 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.649 issued rwts: total=640,758,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.649 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.649 job43: (groupid=0, jobs=1): err= 0: pid=70486: Thu Jul 25 08:56:09 2024
00:14:02.649 read: IOPS=76, BW=9785KiB/s (10.0MB/s)(80.0MiB/8372msec)
00:14:02.649 slat (usec): min=5, max=2158, avg=88.07, stdev=207.43
00:14:02.650 clat (usec): min=7072, max=52082, avg=18741.07, stdev=9319.71
00:14:02.650 lat (usec): min=7083, max=52107, avg=18829.14, stdev=9353.53
00:14:02.650 clat percentiles (usec):
00:14:02.650 | 1.00th=[ 7832], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10814],
00:14:02.650 | 30.00th=[11731], 40.00th=[13304], 50.00th=[15664], 60.00th=[19792],
00:14:02.650 | 70.00th=[22676], 80.00th=[25560], 90.00th=[32900], 95.00th=[38011],
00:14:02.650 | 99.00th=[43254], 99.50th=[45351], 99.90th=[52167], 99.95th=[52167],
00:14:02.650 | 99.99th=[52167]
00:14:02.650 write: IOPS=86, BW=10.8MiB/s (11.4MB/s)(92.5MiB/8526msec); 0 zone resets
00:14:02.650 slat (usec): min=34, max=3960, avg=169.69, stdev=271.27
00:14:02.650 clat (msec): min=48, max=289, avg=91.30, stdev=37.93
00:14:02.650 lat (msec): min=48, max=289, avg=91.47, stdev=37.93
00:14:02.650 clat percentiles (msec):
00:14:02.650 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 66],
00:14:02.650 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 77], 60.00th=[ 82],
00:14:02.650 | 70.00th=[ 93], 80.00th=[ 115], 90.00th=[ 148], 95.00th=[ 171],
00:14:02.650 | 99.00th=[ 234], 99.50th=[ 241], 99.90th=[ 288], 99.95th=[ 288],
00:14:02.650 | 99.99th=[ 288]
00:14:02.650 bw ( KiB/s): min= 1788, max=17152, per=0.86%, avg=9379.90, stdev=4768.32, samples=20
00:14:02.650 iops : min= 13, max= 134, avg=73.05, stdev=37.52, samples=20
00:14:02.650 lat (msec) : 10=6.81%, 20=21.52%, 50=17.90%, 100=40.43%, 250=13.12%
00:14:02.650 lat (msec) : 500=0.22%
00:14:02.650 cpu : usr=0.75%, sys=0.24%, ctx=2418, majf=0, minf=1
00:14:02.650 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.650 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.650 issued rwts: total=640,740,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.650 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.650 job44: (groupid=0, jobs=1): err= 0: pid=70487: Thu Jul 25 08:56:09 2024
00:14:02.650 read: IOPS=74, BW=9559KiB/s (9788kB/s)(80.0MiB/8570msec)
00:14:02.650 slat (usec): min=4, max=5045, avg=76.77, stdev=277.64
00:14:02.650 clat (usec): min=4355, max=91304, avg=17168.66, stdev=9515.97
00:14:02.650 lat (usec): min=5508, max=91343, avg=17245.44, stdev=9523.07
00:14:02.650 clat percentiles (usec):
00:14:02.650 | 1.00th=[ 8291], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[11076],
00:14:02.650 | 30.00th=[11863], 40.00th=[12518], 50.00th=[13566], 60.00th=[16909],
00:14:02.650 | 70.00th=[19530], 80.00th=[21365], 90.00th=[26346], 95.00th=[35390],
00:14:02.650 | 99.00th=[52691], 99.50th=[73925], 99.90th=[91751], 99.95th=[91751],
00:14:02.650 | 99.99th=[91751]
00:14:02.650 write: IOPS=84, BW=10.6MiB/s (11.1MB/s)(91.6MiB/8669msec); 0 zone resets
00:14:02.650 slat (usec): min=30, max=18960, avg=199.95, stdev=756.82
00:14:02.650 clat (usec): min=1246, max=377238, avg=93804.06, stdev=43869.49
00:14:02.650 lat (usec): min=1298, max=377310, avg=94004.01, stdev=43834.81
00:14:02.650 clat percentiles (msec):
00:14:02.650 | 1.00th=[ 12], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 67],
00:14:02.650 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 89],
00:14:02.650 | 70.00th=[ 97], 80.00th=[ 122], 90.00th=[ 150], 95.00th=[ 176],
00:14:02.650 | 99.00th=[ 253], 99.50th=[ 334], 99.90th=[ 376], 99.95th=[ 376],
00:14:02.650 | 99.99th=[ 376]
00:14:02.650 bw ( KiB/s): min= 1280, max=16896, per=0.90%, avg=9776.21, stdev=4558.02, samples=19
00:14:02.650 iops : min= 10, max= 132, avg=76.26, stdev=35.65, samples=19
00:14:02.650 lat (msec) : 2=0.22%, 10=5.10%, 20=30.15%, 50=12.45%, 100=36.93%
00:14:02.650 lat (msec) : 250=14.57%, 500=0.58%
00:14:02.650 cpu : usr=0.65%, sys=0.33%, ctx=2371, majf=0, minf=3
00:14:02.650 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.650 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.650 issued rwts: total=640,733,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.650 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.650 job45: (groupid=0, jobs=1): err= 0: pid=70488: Thu Jul 25 08:56:09 2024
00:14:02.650 read: IOPS=79, BW=9.89MiB/s (10.4MB/s)(80.0MiB/8087msec)
00:14:02.650 slat (usec): min=4, max=1468, avg=64.06, stdev=141.29
00:14:02.650 clat (usec): min=4783, max=76380, avg=21016.25, stdev=12894.67
00:14:02.650 lat (usec): min=4939, max=76395, avg=21080.31, stdev=12895.14
00:14:02.650 clat percentiles (usec):
00:14:02.650 | 1.00th=[ 5145], 5.00th=[ 6456], 10.00th=[ 7570], 20.00th=[10290],
00:14:02.650 | 30.00th=[12911], 40.00th=[15795], 50.00th=[18220], 60.00th=[20841],
00:14:02.650 | 70.00th=[24773], 80.00th=[29492], 90.00th=[36963], 95.00th=[47449],
00:14:02.650 | 99.00th=[66323], 99.50th=[67634], 99.90th=[76022], 99.95th=[76022],
00:14:02.650 | 99.99th=[76022]
00:14:02.650 write: IOPS=80, BW=10.0MiB/s (10.5MB/s)(83.9MiB/8358msec); 0 zone resets
00:14:02.650 slat (usec): min=43, max=5191, avg=182.13, stdev=352.40
00:14:02.650 clat (msec): min=56, max=352, avg=98.57, stdev=39.50
00:14:02.650 lat (msec): min=57, max=352, avg=98.75, stdev=39.48
00:14:02.650 clat percentiles (msec):
00:14:02.650 | 1.00th=[ 58], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 70],
00:14:02.650 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 95],
00:14:02.650 | 70.00th=[ 111], 80.00th=[ 125], 90.00th=[ 144], 95.00th=[ 174],
00:14:02.650 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 355], 99.95th=[ 355],
00:14:02.650 | 99.99th=[ 355]
00:14:02.650 bw ( KiB/s): min= 1536, max=14336, per=0.82%, avg=8942.37, stdev=3933.66, samples=19
00:14:02.650 iops : min= 12, max= 112, avg=69.63, stdev=30.89, samples=19
00:14:02.650 lat (msec) : 10=9.38%, 20=17.93%, 50=19.60%, 100=34.25%, 250=18.46%
00:14:02.650 lat (msec) : 500=0.38%
00:14:02.650 cpu : usr=0.63%, sys=0.27%, ctx=2319, majf=0, minf=5
00:14:02.650 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.650 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.650 issued rwts: total=640,671,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.650 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.650 job46: (groupid=0, jobs=1): err= 0: pid=70489: Thu Jul 25 08:56:09 2024
00:14:02.650 read: IOPS=82, BW=10.3MiB/s (10.8MB/s)(80.0MiB/7786msec)
00:14:02.650 slat (usec): min=4, max=3170, avg=69.96, stdev=202.63
00:14:02.650 clat (msec): min=6, max=118, avg=20.02, stdev=14.26
00:14:02.650 lat (msec): min=6, max=118, avg=20.09, stdev=14.25
00:14:02.650 clat percentiles (msec):
00:14:02.650 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 13],
00:14:02.650 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 19],
00:14:02.650 | 70.00th=[ 21], 80.00th=[ 23], 90.00th=[ 31], 95.00th=[ 40],
00:14:02.650 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 118], 99.95th=[ 118],
00:14:02.650 | 99.99th=[ 118]
00:14:02.650 write: IOPS=80, BW=10.0MiB/s (10.5MB/s)(84.6MiB/8439msec); 0 zone resets
00:14:02.650 slat (usec): min=40, max=4135, avg=181.32, stdev=280.99
00:14:02.650 clat (msec): min=47, max=410, avg=98.69, stdev=47.46
00:14:02.650 lat (msec): min=47, max=411, avg=98.87, stdev=47.47
00:14:02.650 clat percentiles (msec):
00:14:02.650 | 1.00th=[ 57], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 66],
00:14:02.650 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 93],
00:14:02.650 | 70.00th=[ 110], 80.00th=[ 127], 90.00th=[ 146], 95.00th=[ 199],
00:14:02.650 | 99.00th=[ 259], 99.50th=[ 380], 99.90th=[ 409], 99.95th=[ 409],
00:14:02.650 | 99.99th=[ 409]
00:14:02.650 bw ( KiB/s): min= 2554, max=16128, per=0.83%, avg=9010.89, stdev=4177.86, samples=19
00:14:02.650 iops : min= 19, max= 126, avg=70.21, stdev=32.83, samples=19
00:14:02.650 lat (msec) : 10=2.96%, 20=29.01%, 50=15.11%, 100=34.32%, 250=17.77%
00:14:02.650 lat (msec) : 500=0.84%
00:14:02.650 cpu : usr=0.65%, sys=0.27%, ctx=2362, majf=0, minf=1
00:14:02.650 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.650 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.650 issued rwts: total=640,677,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.650 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.650 job47: (groupid=0, jobs=1): err= 0: pid=70490: Thu Jul 25 08:56:09 2024
00:14:02.650 read: IOPS=68, BW=8710KiB/s (8919kB/s)(64.9MiB/7627msec)
00:14:02.650 slat (usec): min=4, max=2859, avg=86.84, stdev=239.90
00:14:02.650 clat (usec): min=4554, max=88675, avg=19264.88, stdev=12585.16
00:14:02.650 lat (usec): min=4656, max=88690, avg=19351.72, stdev=12572.98
00:14:02.650 clat percentiles (usec):
00:14:02.650 | 1.00th=[ 7046], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[10552],
00:14:02.650 | 30.00th=[12256], 40.00th=[13566], 50.00th=[15533], 60.00th=[17695],
00:14:02.651 | 70.00th=[21365], 80.00th=[25035], 90.00th=[30540], 95.00th=[45351],
00:14:02.651 | 99.00th=[70779], 99.50th=[78119], 99.90th=[88605], 99.95th=[88605],
00:14:02.651 | 99.99th=[88605]
00:14:02.651 write: IOPS=73, BW=9371KiB/s (9596kB/s)(80.0MiB/8742msec); 0 zone resets
00:14:02.651 slat (usec): min=38, max=3203, avg=193.06, stdev=308.66
00:14:02.651 clat (msec): min=39, max=518, avg=108.36, stdev=56.40
00:14:02.651 lat (msec): min=39, max=518, avg=108.55, stdev=56.41
00:14:02.651 clat percentiles (msec):
00:14:02.651 | 1.00th=[ 47], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 67],
00:14:02.651 | 30.00th=[ 74], 40.00th=[ 79], 50.00th=[ 95], 60.00th=[ 112],
00:14:02.651 | 70.00th=[ 123], 80.00th=[ 140], 90.00th=[ 169], 95.00th=[ 205],
00:14:02.651 | 99.00th=[ 351], 99.50th=[ 405], 99.90th=[ 518], 99.95th=[ 518],
00:14:02.651 | 99.99th=[ 518]
00:14:02.651 bw ( KiB/s): min= 1788, max=15616, per=0.73%, avg=8013.00, stdev=4099.92, samples=19
00:14:02.651 iops : min= 13, max= 122, avg=62.37, stdev=32.18, samples=19
00:14:02.651 lat (msec) : 10=6.90%, 20=23.38%, 50=13.11%, 100=30.80%, 250=24.76%
00:14:02.651 lat (msec) : 500=0.95%, 750=0.09%
00:14:02.651 cpu : usr=0.65%, sys=0.26%, ctx=2053, majf=0, minf=6
00:14:02.651 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.651 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.651 issued rwts: total=519,640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.651 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.651 job48: (groupid=0, jobs=1): err= 0: pid=70491: Thu Jul 25 08:56:09 2024
00:14:02.651 read: IOPS=77, BW=9891KiB/s (10.1MB/s)(80.0MiB/8282msec)
00:14:02.651 slat (usec): min=5, max=2569, avg=63.51, stdev=154.35
00:14:02.651 clat (msec): min=5, max=105, avg=19.47, stdev=12.91
00:14:02.651 lat (msec): min=5, max=105, avg=19.53, stdev=12.91
00:14:02.651 clat percentiles (msec):
00:14:02.651 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 12],
00:14:02.651 | 30.00th=[ 13], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 20],
00:14:02.651 | 70.00th=[ 22], 80.00th=[ 24], 90.00th=[ 32], 95.00th=[ 42],
00:14:02.651 | 99.00th=[ 93], 99.50th=[ 104], 99.90th=[ 106], 99.95th=[ 106],
00:14:02.651 | 99.99th=[ 106]
00:14:02.651 write: IOPS=82, BW=10.3MiB/s (10.8MB/s)(87.9MiB/8492msec); 0 zone resets
00:14:02.651 slat (usec): min=40, max=5796, avg=197.52, stdev=380.40
00:14:02.651 clat (msec): min=20, max=387, avg=95.58, stdev=45.48
00:14:02.651 lat (msec): min=21, max=387, avg=95.78, stdev=45.47
00:14:02.651 clat percentiles (msec):
00:14:02.651 | 1.00th=[ 29], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 68],
00:14:02.651 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 80], 60.00th=[ 87],
00:14:02.651 | 70.00th=[ 99], 80.00th=[ 115], 90.00th=[ 146], 95.00th=[ 178],
00:14:02.651 | 99.00th=[ 284], 99.50th=[ 305], 99.90th=[ 388], 99.95th=[ 388],
00:14:02.651 | 99.99th=[ 388]
00:14:02.651 bw ( KiB/s): min= 1280, max=15073, per=0.81%, avg=8892.95, stdev=4703.09, samples=20
00:14:02.651 iops : min= 10, max= 117, avg=69.40, stdev=36.64, samples=20
00:14:02.651 lat (msec) : 10=7.82%, 20=21.74%, 50=17.80%, 100=37.75%, 250=14.00%
00:14:02.651 lat (msec) : 500=0.89%
00:14:02.651 cpu : usr=0.69%, sys=0.34%, ctx=2303, majf=0, minf=5
00:14:02.651 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.651 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.651 issued rwts: total=640,703,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.651 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.651 job49: (groupid=0, jobs=1): err= 0: pid=70492: Thu Jul 25 08:56:09 2024
00:14:02.651 read: IOPS=77, BW=9926KiB/s (10.2MB/s)(80.0MiB/8253msec)
00:14:02.651 slat (usec): min=5, max=1474, avg=70.87, stdev=137.96
00:14:02.651 clat (usec): min=8332, max=73872, avg=18327.27, stdev=9273.56
00:14:02.651 lat (usec): min=8451, max=73892, avg=18398.14, stdev=9271.57
00:14:02.651 clat percentiles (usec):
00:14:02.651 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10683], 20.00th=[11600],
00:14:02.651 | 30.00th=[12649], 40.00th=[14222], 50.00th=[16057], 60.00th=[17695],
00:14:02.651 | 70.00th=[20055], 80.00th=[22676], 90.00th=[28705], 95.00th=[36963],
00:14:02.651 | 99.00th=[59507], 99.50th=[64750], 99.90th=[73925], 99.95th=[73925],
00:14:02.651 | 99.99th=[73925]
00:14:02.651 write: IOPS=84, BW=10.5MiB/s (11.0MB/s)(90.4MiB/8581msec); 0 zone resets
00:14:02.651 slat (usec): min=35, max=6965, avg=195.58, stdev=395.22
00:14:02.651 clat (msec): min=31, max=287, avg=93.91, stdev=43.23
00:14:02.651 lat (msec): min=31, max=287, avg=94.11, stdev=43.22
00:14:02.651 clat percentiles (msec):
00:14:02.651 | 1.00th=[ 39], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 66],
00:14:02.651 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 77], 60.00th=[ 83],
00:14:02.651 | 70.00th=[ 95], 80.00th=[ 116], 90.00th=[ 153], 95.00th=[ 194],
00:14:02.651 | 99.00th=[ 255], 99.50th=[ 268], 99.90th=[ 288], 99.95th=[ 288],
00:14:02.651 | 99.99th=[ 288]
00:14:02.651 bw ( KiB/s): min= 2048, max=15872, per=0.88%, avg=9645.26, stdev=4745.90, samples=19
00:14:02.651 iops : min= 16, max= 124, avg=75.26, stdev=36.97, samples=19
00:14:02.651 lat (msec) : 10=1.76%, 20=31.33%, 50=13.50%, 100=39.10%, 250=13.65%
00:14:02.651 lat (msec) : 500=0.66%
00:14:02.651 cpu : usr=0.62%, sys=0.33%, ctx=2442, majf=0, minf=1
00:14:02.651 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.651 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.651 issued rwts: total=640,723,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.651 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.651 job50: (groupid=0, jobs=1): err= 0: pid=70493: Thu Jul 25 08:56:09 2024
00:14:02.651 read: IOPS=108, BW=13.6MiB/s (14.3MB/s)(120MiB/8812msec)
00:14:02.651 slat (usec): min=4, max=1422, avg=62.37, stdev=134.09
00:14:02.651 clat (usec): min=4367, max=90282, avg=14292.81, stdev=9815.53
00:14:02.651 lat (usec): min=4487, max=90294, avg=14355.18, stdev=9807.09
00:14:02.651 clat percentiles (usec):
00:14:02.651 | 1.00th=[ 5276], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7832],
00:14:02.651 | 30.00th=[ 9241], 40.00th=[10683], 50.00th=[11600], 60.00th=[12649],
00:14:02.651 | 70.00th=[15139], 80.00th=[18482], 90.00th=[22938], 95.00th=[33162],
00:14:02.651 | 99.00th=[48497], 99.50th=[76022], 99.90th=[90702], 99.95th=[90702],
00:14:02.651 | 99.99th=[90702]
00:14:02.651 write: IOPS=116, BW=14.6MiB/s (15.3MB/s)(121MiB/8302msec); 0 zone resets
00:14:02.651 slat (usec): min=41, max=5820, avg=203.65, stdev=389.08
00:14:02.651 clat (msec): min=7, max=232, avg=67.47, stdev=31.78
00:14:02.651 lat (msec): min=7, max=232, avg=67.68, stdev=31.81
00:14:02.651 clat percentiles (msec):
00:14:02.651 | 1.00th=[ 40], 5.00th=[ 43], 10.00th=[ 45], 20.00th=[ 48],
00:14:02.651 | 30.00th=[ 52], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 61],
00:14:02.651 | 70.00th=[ 69], 80.00th=[ 80], 90.00th=[ 108], 95.00th=[ 138],
00:14:02.651 | 99.00th=[ 197], 99.50th=[ 207], 99.90th=[ 232], 99.95th=[ 232],
00:14:02.651 | 99.99th=[ 232]
00:14:02.651 bw ( KiB/s): min= 3584, max=20224, per=1.13%, avg=12337.45, stdev=5797.98, samples=20
00:14:02.651 iops : min= 28, max= 158, avg=96.30, stdev=45.22, samples=20
00:14:02.651 lat (msec) : 10=17.92%, 20=24.39%, 50=20.46%, 100=31.43%, 250=5.80%
00:14:02.651 cpu : usr=0.99%, sys=0.39%, ctx=3272, majf=0, minf=1
00:14:02.651 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.651 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.651 issued rwts: total=960,971,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.651 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.651 job51: (groupid=0, jobs=1): err= 0: pid=70494: Thu Jul 25 08:56:09 2024
00:14:02.651 read: IOPS=107, BW=13.4MiB/s (14.1MB/s)(120MiB/8954msec)
00:14:02.651 slat (usec): min=5, max=5688, avg=68.58, stdev=265.00
00:14:02.651 clat (msec): min=2, max=228, avg=15.69, stdev=21.57
00:14:02.651 lat (msec): min=3, max=228, avg=15.76, stdev=21.59
00:14:02.651 clat percentiles (msec):
00:14:02.651 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 9],
00:14:02.651 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 14],
00:14:02.651 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 22], 95.00th=[ 30],
00:14:02.651 | 99.00th=[ 114], 99.50th=[ 224], 99.90th=[ 230], 99.95th=[ 230],
00:14:02.651 | 99.99th=[ 230]
00:14:02.651 write: IOPS=122, BW=15.3MiB/s (16.1MB/s)(125MiB/8149msec); 0 zone resets
00:14:02.651 slat (usec): min=40, max=12182, avg=206.26, stdev=475.02
00:14:02.651 clat (usec): min=1387, max=278312, avg=64508.87, stdev=30005.03
00:14:02.651 lat (usec): min=1450, max=278744, avg=64715.14, stdev=30040.91
00:14:02.651 clat percentiles (msec):
00:14:02.651 | 1.00th=[ 6], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 47],
00:14:02.651 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 62],
00:14:02.651 | 70.00th=[ 68], 80.00th=[ 81], 90.00th=[ 104], 95.00th=[ 121],
00:14:02.651 | 99.00th=[ 167], 99.50th=[ 211], 99.90th=[ 279], 99.95th=[ 279],
00:14:02.651 | 99.99th=[ 279]
00:14:02.651 bw ( KiB/s): min= 256, max=23411, per=1.16%, avg=12684.35, stdev=6267.89, samples=20
00:14:02.651 iops : min= 2, max= 182, avg=98.85, stdev=48.92, samples=20
00:14:02.651 lat (msec) : 2=0.10%, 4=0.61%, 10=17.76%, 20=25.61%, 50=20.20%
00:14:02.651 lat (msec) : 100=29.44%, 250=6.17%, 500=0.10%
00:14:02.651 cpu : usr=1.06%, sys=0.54%, ctx=3213, majf=0, minf=3
00:14:02.651 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.651 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.651 issued rwts: total=960,1000,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.651 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.651 job52: (groupid=0, jobs=1): err= 0: pid=70495: Thu Jul 25 08:56:09 2024
00:14:02.652 read: IOPS=112, BW=14.0MiB/s (14.7MB/s)(120MiB/8551msec)
00:14:02.652 slat (usec): min=5, max=2352, avg=66.55, stdev=157.75
00:14:02.652 clat (usec): min=4001, max=59418, avg=13440.70, stdev=6970.59
00:14:02.652 lat (usec): min=4017, max=59430, avg=13507.25, stdev=6992.57
00:14:02.652 clat percentiles (usec):
00:14:02.652 | 1.00th=[ 4359], 5.00th=[ 4883], 10.00th=[ 6325], 20.00th=[ 8160],
00:14:02.652 | 30.00th=[ 9241], 40.00th=[10552], 50.00th=[11731], 60.00th=[12913],
00:14:02.652 | 70.00th=[15270], 80.00th=[18482], 90.00th=[22414], 95.00th=[26608],
00:14:02.652 | 99.00th=[35914], 99.50th=[44827], 99.90th=[59507], 99.95th=[59507],
00:14:02.652 | 99.99th=[59507]
00:14:02.652 write: IOPS=121, BW=15.2MiB/s (15.9MB/s)(128MiB/8391msec); 0 zone resets
00:14:02.652 slat (usec): min=30, max=4208, avg=159.12, stdev=259.62
00:14:02.652 clat (msec): min=36, max=266, avg=65.05, stdev=29.01
00:14:02.652 lat (msec): min=36, max=266, avg=65.21, stdev=29.03
00:14:02.652 clat percentiles (msec):
00:14:02.652 | 1.00th=[ 40], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 47],
00:14:02.652 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 61],
00:14:02.652 | 70.00th=[ 68], 80.00th=[ 80], 90.00th=[ 93], 95.00th=[ 108],
00:14:02.652 | 99.00th=[ 194], 99.50th=[ 230], 99.90th=[ 253], 99.95th=[ 266],
00:14:02.652 | 99.99th=[ 266]
00:14:02.652 bw ( KiB/s): min= 1024, max=20992, per=1.19%, avg=12960.55, stdev=5862.97, samples=20
00:14:02.652 iops : min= 8, max= 164, avg=101.10, stdev=45.82, samples=20
00:14:02.652 lat (msec) : 10=18.13%, 20=22.53%, 50=23.69%, 100=31.77%, 250=3.79%
00:14:02.652 lat (msec) : 500=0.10%
00:14:02.652 cpu : usr=0.90%, sys=0.37%, ctx=3308, majf=0, minf=3
00:14:02.652 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.652 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.652 issued rwts: total=960,1020,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.652 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.652 job53: (groupid=0, jobs=1): err= 0: pid=70496: Thu Jul 25 08:56:09 2024
00:14:02.652 read: IOPS=111, BW=13.9MiB/s (14.6MB/s)(120MiB/8629msec)
00:14:02.652 slat (usec): min=4, max=2991, avg=69.05, stdev=197.33
00:14:02.652 clat (usec): min=4580, max=60596, avg=15158.89, stdev=7632.88
00:14:02.652 lat (usec): min=4750, max=60811, avg=15227.94, stdev=7645.41
00:14:02.652 clat percentiles (usec):
00:14:02.652 | 1.00th=[ 6259], 5.00th=[ 7308], 10.00th=[ 8094], 20.00th=[ 9372],
00:14:02.652 | 30.00th=[10683], 40.00th=[11731], 50.00th=[12780], 60.00th=[15139],
00:14:02.652 | 70.00th=[17433], 80.00th=[19268], 90.00th=[22938], 95.00th=[29492],
00:14:02.652 | 99.00th=[42206], 99.50th=[54789], 99.90th=[60556], 99.95th=[60556],
00:14:02.652 | 99.99th=[60556]
00:14:02.652 write: IOPS=119, BW=15.0MiB/s (15.7MB/s)(123MiB/8199msec); 0 zone resets
00:14:02.652 slat (usec): min=34, max=7199, avg=186.93, stdev=431.35
00:14:02.652 clat (msec): min=38, max=265, avg=65.84, stdev=27.94
00:14:02.652 lat (msec): min=38, max=265, avg=66.03, stdev=27.95
00:14:02.652 clat percentiles (msec):
00:14:02.652 | 1.00th=[ 40], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 47],
00:14:02.652 | 30.00th=[ 52], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 62],
00:14:02.652 | 70.00th=[ 68], 80.00th=[ 79], 90.00th=[ 100], 95.00th=[ 117],
00:14:02.652 | 99.00th=[ 192], 99.50th=[ 220], 99.90th=[ 266], 99.95th=[ 266],
00:14:02.652 | 99.99th=[ 266]
00:14:02.652 bw ( KiB/s): min= 1792, max=19968, per=1.14%, avg=12488.45, stdev=5760.34, samples=20
00:14:02.652 iops : min= 14, max= 156, avg=97.40, stdev=44.97, samples=20
00:14:02.652 lat (msec) : 10=12.20%, 20=28.46%, 50=22.49%, 100=31.96%, 250=4.84%
00:14:02.652 lat (msec) : 500=0.05%
00:14:02.652 cpu : usr=0.95%, sys=0.39%, ctx=3239, majf=0, minf=5
00:14:02.652 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.652 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.652 issued rwts: total=960,983,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.652 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.652 job54: (groupid=0, jobs=1): err= 0: pid=70497: Thu Jul 25 08:56:09 2024
00:14:02.652 read: IOPS=96, BW=12.1MiB/s (12.7MB/s)(100MiB/8277msec)
00:14:02.652 slat (usec): min=5, max=4922, avg=72.14, stdev=250.48
00:14:02.652 clat (usec): min=3288, max=97259, avg=12288.77, stdev=10977.77
00:14:02.652 lat (usec): min=3348, max=97271, avg=12360.90, stdev=10983.02
00:14:02.652 clat percentiles (usec):
00:14:02.652 | 1.00th=[ 3687], 5.00th=[ 4948], 10.00th=[ 5538], 20.00th=[ 6325],
00:14:02.652 | 30.00th=[ 7046], 40.00th=[ 7898], 50.00th=[ 8848], 60.00th=[10028],
00:14:02.652 | 70.00th=[12256], 80.00th=[15008], 90.00th=[21890], 95.00th=[32375],
00:14:02.652 | 99.00th=[52691], 99.50th=[87557], 99.90th=[96994], 99.95th=[96994],
00:14:02.652 | 99.99th=[96994]
00:14:02.652 write: IOPS=103, BW=12.9MiB/s (13.5MB/s)(113MiB/8785msec); 0 zone resets
00:14:02.652 slat (usec): min=30, max=4821, avg=181.45, stdev=389.51
00:14:02.652 clat (msec): min=36, max=472, avg=76.88, stdev=45.35
00:14:02.652 lat (msec): min=39, max=472, avg=77.06, stdev=45.34
00:14:02.652 clat percentiles (msec):
00:14:02.652 | 1.00th=[ 41], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 50],
00:14:02.652 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 72],
00:14:02.652 | 70.00th=[ 82], 80.00th=[ 97], 90.00th=[ 120], 95.00th=[ 140],
00:14:02.652 | 99.00th=[ 241], 99.50th=[ 430], 99.90th=[ 472], 99.95th=[ 472],
00:14:02.652 | 99.99th=[ 472]
00:14:02.652 bw ( KiB/s): min= 510, max=20736, per=1.05%, avg=11495.25, stdev=5198.27, samples=20
00:14:02.652 iops : min= 3, max= 162, avg=89.65, stdev=40.69, samples=20
00:14:02.652 lat (msec) : 4=0.76%, 10=27.20%, 20=13.31%, 50=16.65%, 100=32.42%
00:14:02.652 lat (msec) : 250=9.20%, 500=0.47%
00:14:02.652 cpu : usr=0.88%, sys=0.27%, ctx=2920, majf=0, minf=3
00:14:02.652 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.652 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.652 issued rwts: total=800,906,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.652 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.652 job55: (groupid=0, jobs=1): err= 0: pid=70498: Thu Jul 25 08:56:09 2024
00:14:02.652 read: IOPS=111, BW=14.0MiB/s (14.7MB/s)(120MiB/8572msec)
00:14:02.652 slat (usec): min=5, max=2095, avg=69.82, stdev=189.69
00:14:02.652 clat (usec): min=4764, max=64993, avg=14171.38, stdev=8869.80
00:14:02.652 lat (usec): min=4793, max=65013, avg=14241.20, stdev=8873.42
00:14:02.652 clat percentiles (usec):
00:14:02.652 | 1.00th=[ 5473], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 8094],
00:14:02.652 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[11338], 60.00th=[12911],
00:14:02.652 | 70.00th=[15664], 80.00th=[18482], 90.00th=[22938], 95.00th=[31589],
00:14:02.652 | 99.00th=[53216], 99.50th=[55837], 99.90th=[64750], 99.95th=[64750],
00:14:02.652 | 99.99th=[64750]
00:14:02.652 write: IOPS=116, BW=14.5MiB/s (15.2MB/s)(121MiB/8338msec); 0 zone resets
00:14:02.652 slat (usec): min=41, max=6011, avg=183.54, stdev=356.15
00:14:02.652 clat (msec): min=30, max=271, avg=67.95, stdev=31.48
00:14:02.652 lat (msec): min=30, max=271, avg=68.14, stdev=31.51
00:14:02.652 clat percentiles (msec):
00:14:02.652 | 1.00th=[ 39], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 48],
00:14:02.652 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 63],
00:14:02.652 | 70.00th=[ 70], 80.00th=[ 82], 90.00th=[ 106], 95.00th=[ 132],
00:14:02.652 | 99.00th=[ 199], 99.50th=[ 236], 99.90th=[ 271], 99.95th=[ 271],
00:14:02.652 | 99.99th=[ 271]
00:14:02.652 bw ( KiB/s): min= 4096, max=20992, per=1.18%, avg=12925.95, stdev=5131.07, samples=19
00:14:02.652 iops : min= 32, max= 164, avg=100.79, stdev=39.99, samples=19
00:14:02.652 lat (msec) : 10=20.38%, 20=21.84%, 50=21.06%, 100=30.96%, 250=5.65%
00:14:02.652 lat (msec) : 500=0.10%
00:14:02.652 cpu : usr=0.96%, sys=0.35%, ctx=3204, majf=0, minf=5
00:14:02.652 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.652 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.652 issued rwts: total=960,968,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.652 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:02.652 job56: (groupid=0, jobs=1): err= 0: pid=70499: Thu Jul 25 08:56:09 2024
00:14:02.652 read: IOPS=111, BW=13.9MiB/s (14.6MB/s)(120MiB/8615msec)
00:14:02.652 slat (usec): min=5, max=1806, avg=63.62, stdev=151.54
00:14:02.652 clat (msec): min=4, max=129, avg=15.26, stdev=11.47
00:14:02.652 lat (msec): min=4, max=129, avg=15.32, stdev=11.47
00:14:02.652 clat percentiles (msec):
00:14:02.652 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10],
00:14:02.652 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14],
00:14:02.652 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 24], 95.00th=[ 27],
00:14:02.652 | 99.00th=[ 40], 99.50th=[ 122], 99.90th=[ 130], 99.95th=[ 130],
00:14:02.652 | 99.99th=[ 130]
00:14:02.652 write: IOPS=120, BW=15.1MiB/s (15.8MB/s)(123MiB/8187msec); 0 zone resets
00:14:02.652 slat (usec): min=31, max=4932, avg=192.65, stdev=345.59
00:14:02.652 clat (msec): min=37, max=250, avg=65.46, stdev=27.52
00:14:02.652 lat (msec): min=38, max=251, avg=65.65, stdev=27.51
00:14:02.652 clat percentiles (msec):
00:14:02.652 | 1.00th=[ 40], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 48],
00:14:02.652 | 30.00th=[ 51], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 63],
00:14:02.652 | 70.00th=[ 70], 80.00th=[ 79], 90.00th=[ 92], 95.00th=[ 114],
00:14:02.652 | 99.00th=[ 190], 99.50th=[ 239], 99.90th=[ 251], 99.95th=[ 251],
00:14:02.652 | 99.99th=[ 251]
00:14:02.652 bw ( KiB/s): min= 2048, max=20992, per=1.15%, avg=12538.15, stdev=5519.64, samples=20
00:14:02.652 iops : min= 16, max= 164, avg=97.80, stdev=43.07, samples=20
00:14:02.652 lat (msec) : 10=12.99%, 20=27.07%, 50=23.47%, 100=32.31%, 250=4.06%
00:14:02.652 lat (msec) : 500=0.10%
00:14:02.652 cpu : usr=1.02%, sys=0.38%, ctx=3321, majf=0, minf=3
00:14:02.652 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:02.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.652 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:02.653 issued rwts: total=960,987,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:02.653 latency :
target=0, window=0, percentile=100.00%, depth=8 00:14:02.653 job57: (groupid=0, jobs=1): err= 0: pid=70500: Thu Jul 25 08:56:09 2024 00:14:02.653 read: IOPS=97, BW=12.2MiB/s (12.8MB/s)(101MiB/8327msec) 00:14:02.653 slat (usec): min=5, max=5724, avg=79.17, stdev=306.79 00:14:02.653 clat (usec): min=2150, max=48007, avg=12492.42, stdev=6495.02 00:14:02.653 lat (usec): min=3555, max=48019, avg=12571.60, stdev=6476.67 00:14:02.653 clat percentiles (usec): 00:14:02.653 | 1.00th=[ 4047], 5.00th=[ 5014], 10.00th=[ 6259], 20.00th=[ 7373], 00:14:02.653 | 30.00th=[ 8455], 40.00th=[ 9765], 50.00th=[11338], 60.00th=[12518], 00:14:02.653 | 70.00th=[14353], 80.00th=[16581], 90.00th=[20055], 95.00th=[23200], 00:14:02.653 | 99.00th=[34866], 99.50th=[42730], 99.90th=[47973], 99.95th=[47973], 00:14:02.653 | 99.99th=[47973] 00:14:02.653 write: IOPS=110, BW=13.8MiB/s (14.5MB/s)(120MiB/8697msec); 0 zone resets 00:14:02.653 slat (usec): min=34, max=5393, avg=198.50, stdev=409.03 00:14:02.653 clat (msec): min=14, max=342, avg=71.58, stdev=37.23 00:14:02.653 lat (msec): min=14, max=342, avg=71.78, stdev=37.26 00:14:02.653 clat percentiles (msec): 00:14:02.653 | 1.00th=[ 39], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 47], 00:14:02.653 | 30.00th=[ 52], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 65], 00:14:02.653 | 70.00th=[ 75], 80.00th=[ 91], 90.00th=[ 117], 95.00th=[ 148], 00:14:02.653 | 99.00th=[ 205], 99.50th=[ 241], 99.90th=[ 342], 99.95th=[ 342], 00:14:02.653 | 99.99th=[ 342] 00:14:02.653 bw ( KiB/s): min= 2304, max=19968, per=1.13%, avg=12310.37, stdev=5260.45, samples=19 00:14:02.653 iops : min= 18, max= 156, avg=96.05, stdev=41.06, samples=19 00:14:02.653 lat (msec) : 4=0.40%, 10=18.97%, 20=22.08%, 50=19.03%, 100=31.79% 00:14:02.653 lat (msec) : 250=7.51%, 500=0.23% 00:14:02.653 cpu : usr=0.84%, sys=0.44%, ctx=3111, majf=0, minf=5 00:14:02.653 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:14:02.653 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.653 issued rwts: total=811,960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.653 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.653 job58: (groupid=0, jobs=1): err= 0: pid=70502: Thu Jul 25 08:56:09 2024 00:14:02.653 read: IOPS=99, BW=12.4MiB/s (13.0MB/s)(100MiB/8064msec) 00:14:02.653 slat (usec): min=4, max=1967, avg=68.27, stdev=168.70 00:14:02.653 clat (usec): min=2486, max=36078, avg=10389.99, stdev=6507.28 00:14:02.653 lat (usec): min=2562, max=36659, avg=10458.26, stdev=6533.45 00:14:02.653 clat percentiles (usec): 00:14:02.653 | 1.00th=[ 3392], 5.00th=[ 3916], 10.00th=[ 4424], 20.00th=[ 5800], 00:14:02.653 | 30.00th=[ 6521], 40.00th=[ 7373], 50.00th=[ 8094], 60.00th=[ 9634], 00:14:02.653 | 70.00th=[11469], 80.00th=[13960], 90.00th=[18744], 95.00th=[26608], 00:14:02.653 | 99.00th=[32637], 99.50th=[33162], 99.90th=[35914], 99.95th=[35914], 00:14:02.653 | 99.99th=[35914] 00:14:02.653 write: IOPS=96, BW=12.1MiB/s (12.7MB/s)(109MiB/8970msec); 0 zone resets 00:14:02.653 slat (usec): min=36, max=6491, avg=198.33, stdev=433.95 00:14:02.653 clat (msec): min=38, max=279, avg=82.09, stdev=37.45 00:14:02.653 lat (msec): min=38, max=279, avg=82.29, stdev=37.47 00:14:02.653 clat percentiles (msec): 00:14:02.653 | 1.00th=[ 41], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 52], 00:14:02.653 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 73], 60.00th=[ 82], 00:14:02.653 | 70.00th=[ 95], 80.00th=[ 108], 90.00th=[ 127], 95.00th=[ 148], 00:14:02.653 | 99.00th=[ 232], 99.50th=[ 253], 99.90th=[ 279], 99.95th=[ 279], 00:14:02.653 | 99.99th=[ 279] 00:14:02.653 bw ( KiB/s): min= 3576, max=17884, per=1.01%, avg=11018.80, stdev=4304.92, samples=20 00:14:02.653 iops : min= 27, max= 139, avg=85.95, stdev=33.65, samples=20 00:14:02.653 lat (msec) : 4=2.82%, 10=26.80%, 20=13.85%, 50=13.25%, 100=29.62% 00:14:02.653 lat (msec) : 250=13.31%, 500=0.36% 00:14:02.653 cpu : 
usr=0.74%, sys=0.47%, ctx=2825, majf=0, minf=7 00:14:02.653 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.653 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.653 issued rwts: total=800,868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.653 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.653 job59: (groupid=0, jobs=1): err= 0: pid=70503: Thu Jul 25 08:56:09 2024 00:14:02.653 read: IOPS=99, BW=12.4MiB/s (13.0MB/s)(100MiB/8080msec) 00:14:02.653 slat (usec): min=4, max=3359, avg=76.56, stdev=220.42 00:14:02.653 clat (msec): min=2, max=108, avg=13.74, stdev=16.34 00:14:02.653 lat (msec): min=2, max=108, avg=13.82, stdev=16.36 00:14:02.653 clat percentiles (msec): 00:14:02.653 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:14:02.653 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:14:02.653 | 70.00th=[ 12], 80.00th=[ 15], 90.00th=[ 26], 95.00th=[ 50], 00:14:02.653 | 99.00th=[ 84], 99.50th=[ 104], 99.90th=[ 109], 99.95th=[ 109], 00:14:02.653 | 99.99th=[ 109] 00:14:02.653 write: IOPS=99, BW=12.4MiB/s (13.0MB/s)(107MiB/8650msec); 0 zone resets 00:14:02.653 slat (usec): min=28, max=4795, avg=202.00, stdev=396.58 00:14:02.653 clat (msec): min=22, max=291, avg=80.09, stdev=39.17 00:14:02.653 lat (msec): min=22, max=291, avg=80.30, stdev=39.16 00:14:02.653 clat percentiles (msec): 00:14:02.653 | 1.00th=[ 39], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 51], 00:14:02.653 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 79], 00:14:02.653 | 70.00th=[ 91], 80.00th=[ 104], 90.00th=[ 126], 95.00th=[ 153], 00:14:02.653 | 99.00th=[ 257], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 292], 00:14:02.653 | 99.99th=[ 292] 00:14:02.653 bw ( KiB/s): min= 4352, max=18176, per=1.00%, avg=10884.85, stdev=4062.52, samples=20 00:14:02.653 iops : min= 34, max= 142, avg=84.95, stdev=31.73, samples=20 
00:14:02.653 lat (msec) : 4=1.15%, 10=29.01%, 20=11.82%, 50=13.45%, 100=32.51% 00:14:02.653 lat (msec) : 250=11.52%, 500=0.54% 00:14:02.653 cpu : usr=0.90%, sys=0.28%, ctx=2882, majf=0, minf=3 00:14:02.653 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.653 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.653 issued rwts: total=800,858,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.653 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.653 job60: (groupid=0, jobs=1): err= 0: pid=70508: Thu Jul 25 08:56:09 2024 00:14:02.653 read: IOPS=95, BW=11.9MiB/s (12.5MB/s)(100MiB/8399msec) 00:14:02.653 slat (usec): min=5, max=2641, avg=61.36, stdev=169.55 00:14:02.653 clat (msec): min=3, max=123, avg=15.00, stdev=14.34 00:14:02.653 lat (msec): min=3, max=123, avg=15.06, stdev=14.35 00:14:02.653 clat percentiles (msec): 00:14:02.653 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8], 00:14:02.653 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 12], 60.00th=[ 14], 00:14:02.653 | 70.00th=[ 16], 80.00th=[ 19], 90.00th=[ 26], 95.00th=[ 32], 00:14:02.653 | 99.00th=[ 105], 99.50th=[ 112], 99.90th=[ 125], 99.95th=[ 125], 00:14:02.653 | 99.99th=[ 125] 00:14:02.653 write: IOPS=109, BW=13.7MiB/s (14.4MB/s)(117MiB/8515msec); 0 zone resets 00:14:02.653 slat (usec): min=29, max=2972, avg=191.86, stdev=277.77 00:14:02.653 clat (msec): min=33, max=334, avg=71.72, stdev=33.46 00:14:02.653 lat (msec): min=33, max=335, avg=71.91, stdev=33.48 00:14:02.653 clat percentiles (msec): 00:14:02.653 | 1.00th=[ 38], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 49], 00:14:02.653 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 72], 00:14:02.653 | 70.00th=[ 80], 80.00th=[ 89], 90.00th=[ 106], 95.00th=[ 118], 00:14:02.653 | 99.00th=[ 228], 99.50th=[ 313], 99.90th=[ 334], 99.95th=[ 334], 00:14:02.653 | 99.99th=[ 334] 00:14:02.653 bw ( KiB/s): min= 
1792, max=22016, per=1.09%, avg=11883.25, stdev=4654.69, samples=20 00:14:02.653 iops : min= 14, max= 172, avg=92.75, stdev=36.41, samples=20 00:14:02.653 lat (msec) : 4=0.29%, 10=18.55%, 20=18.95%, 50=19.64%, 100=35.31% 00:14:02.653 lat (msec) : 250=6.91%, 500=0.35% 00:14:02.653 cpu : usr=0.90%, sys=0.40%, ctx=3024, majf=0, minf=1 00:14:02.653 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.653 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.653 issued rwts: total=800,936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.653 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.653 job61: (groupid=0, jobs=1): err= 0: pid=70509: Thu Jul 25 08:56:09 2024 00:14:02.653 read: IOPS=117, BW=14.7MiB/s (15.4MB/s)(120MiB/8172msec) 00:14:02.653 slat (usec): min=4, max=1573, avg=64.70, stdev=149.26 00:14:02.653 clat (usec): min=3983, max=64398, avg=11717.83, stdev=7725.69 00:14:02.653 lat (usec): min=4104, max=64450, avg=11782.53, stdev=7733.66 00:14:02.653 clat percentiles (usec): 00:14:02.653 | 1.00th=[ 4686], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6456], 00:14:02.653 | 30.00th=[ 7046], 40.00th=[ 7898], 50.00th=[ 9503], 60.00th=[11076], 00:14:02.653 | 70.00th=[12780], 80.00th=[15664], 90.00th=[20055], 95.00th=[23725], 00:14:02.653 | 99.00th=[44827], 99.50th=[59507], 99.90th=[64226], 99.95th=[64226], 00:14:02.653 | 99.99th=[64226] 00:14:02.653 write: IOPS=114, BW=14.3MiB/s (15.0MB/s)(124MiB/8637msec); 0 zone resets 00:14:02.653 slat (usec): min=31, max=4091, avg=160.82, stdev=253.90 00:14:02.653 clat (msec): min=16, max=309, avg=69.09, stdev=36.50 00:14:02.653 lat (msec): min=16, max=309, avg=69.25, stdev=36.50 00:14:02.653 clat percentiles (msec): 00:14:02.653 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 45], 00:14:02.653 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 60], 60.00th=[ 65], 00:14:02.653 | 
70.00th=[ 73], 80.00th=[ 87], 90.00th=[ 105], 95.00th=[ 126], 00:14:02.653 | 99.00th=[ 243], 99.50th=[ 288], 99.90th=[ 309], 99.95th=[ 309], 00:14:02.653 | 99.99th=[ 309] 00:14:02.653 bw ( KiB/s): min= 4608, max=22528, per=1.15%, avg=12565.45, stdev=5573.89, samples=20 00:14:02.653 iops : min= 36, max= 176, avg=98.05, stdev=43.49, samples=20 00:14:02.653 lat (msec) : 4=0.05%, 10=26.10%, 20=18.41%, 50=20.31%, 100=28.77% 00:14:02.653 lat (msec) : 250=6.00%, 500=0.36% 00:14:02.653 cpu : usr=1.06%, sys=0.37%, ctx=3212, majf=0, minf=5 00:14:02.653 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.654 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.654 issued rwts: total=960,990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.654 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.654 job62: (groupid=0, jobs=1): err= 0: pid=70511: Thu Jul 25 08:56:09 2024 00:14:02.654 read: IOPS=99, BW=12.4MiB/s (13.0MB/s)(100MiB/8080msec) 00:14:02.654 slat (usec): min=4, max=6164, avg=81.60, stdev=332.75 00:14:02.654 clat (usec): min=2828, max=52155, avg=11075.93, stdev=7657.79 00:14:02.654 lat (usec): min=2837, max=52264, avg=11157.52, stdev=7700.17 00:14:02.654 clat percentiles (usec): 00:14:02.654 | 1.00th=[ 3130], 5.00th=[ 3589], 10.00th=[ 3785], 20.00th=[ 4883], 00:14:02.654 | 30.00th=[ 6063], 40.00th=[ 7242], 50.00th=[ 8586], 60.00th=[10290], 00:14:02.654 | 70.00th=[12649], 80.00th=[16450], 90.00th=[21890], 95.00th=[26346], 00:14:02.654 | 99.00th=[36439], 99.50th=[38536], 99.90th=[52167], 99.95th=[52167], 00:14:02.654 | 99.99th=[52167] 00:14:02.654 write: IOPS=102, BW=12.8MiB/s (13.4MB/s)(114MiB/8932msec); 0 zone resets 00:14:02.654 slat (usec): min=40, max=5201, avg=198.18, stdev=363.80 00:14:02.654 clat (msec): min=37, max=243, avg=77.62, stdev=32.55 00:14:02.654 lat (msec): min=37, max=243, avg=77.82, stdev=32.55 
00:14:02.654 clat percentiles (msec): 00:14:02.654 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 53], 00:14:02.654 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 78], 00:14:02.654 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 115], 95.00th=[ 140], 00:14:02.654 | 99.00th=[ 205], 99.50th=[ 218], 99.90th=[ 245], 99.95th=[ 245], 00:14:02.654 | 99.99th=[ 245] 00:14:02.654 bw ( KiB/s): min= 255, max=20224, per=1.06%, avg=11590.90, stdev=4104.92, samples=20 00:14:02.654 iops : min= 1, max= 158, avg=90.35, stdev=32.19, samples=20 00:14:02.654 lat (msec) : 4=5.72%, 10=21.70%, 20=12.25%, 50=15.64%, 100=36.87% 00:14:02.654 lat (msec) : 250=7.82% 00:14:02.654 cpu : usr=0.88%, sys=0.33%, ctx=2985, majf=0, minf=3 00:14:02.654 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.654 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.654 issued rwts: total=800,914,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.654 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.654 job63: (groupid=0, jobs=1): err= 0: pid=70516: Thu Jul 25 08:56:09 2024 00:14:02.654 read: IOPS=116, BW=14.6MiB/s (15.3MB/s)(120MiB/8239msec) 00:14:02.654 slat (usec): min=4, max=2644, avg=61.76, stdev=142.47 00:14:02.654 clat (usec): min=4638, max=52765, avg=12087.99, stdev=6951.96 00:14:02.654 lat (usec): min=4646, max=52781, avg=12149.75, stdev=6962.25 00:14:02.654 clat percentiles (usec): 00:14:02.654 | 1.00th=[ 4948], 5.00th=[ 5407], 10.00th=[ 6325], 20.00th=[ 7046], 00:14:02.654 | 30.00th=[ 7963], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[11207], 00:14:02.654 | 70.00th=[13566], 80.00th=[16057], 90.00th=[19530], 95.00th=[26870], 00:14:02.654 | 99.00th=[39060], 99.50th=[43779], 99.90th=[52691], 99.95th=[52691], 00:14:02.654 | 99.99th=[52691] 00:14:02.654 write: IOPS=113, BW=14.1MiB/s (14.8MB/s)(122MiB/8600msec); 0 zone resets 00:14:02.654 slat 
(usec): min=32, max=9047, avg=172.36, stdev=406.83 00:14:02.654 clat (msec): min=25, max=329, avg=69.99, stdev=35.81 00:14:02.654 lat (msec): min=25, max=329, avg=70.16, stdev=35.82 00:14:02.654 clat percentiles (msec): 00:14:02.654 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 48], 00:14:02.654 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 65], 00:14:02.654 | 70.00th=[ 74], 80.00th=[ 88], 90.00th=[ 107], 95.00th=[ 123], 00:14:02.654 | 99.00th=[ 224], 99.50th=[ 288], 99.90th=[ 330], 99.95th=[ 330], 00:14:02.654 | 99.99th=[ 330] 00:14:02.654 bw ( KiB/s): min= 3328, max=20736, per=1.13%, avg=12346.55, stdev=5775.55, samples=20 00:14:02.654 iops : min= 26, max= 162, avg=96.30, stdev=45.11, samples=20 00:14:02.654 lat (msec) : 10=26.13%, 20=18.83%, 50=18.83%, 100=29.49%, 250=6.36% 00:14:02.654 lat (msec) : 500=0.36% 00:14:02.654 cpu : usr=1.00%, sys=0.38%, ctx=3210, majf=0, minf=5 00:14:02.654 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.654 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.654 issued rwts: total=960,973,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.654 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.654 job64: (groupid=0, jobs=1): err= 0: pid=70517: Thu Jul 25 08:56:09 2024 00:14:02.654 read: IOPS=96, BW=12.1MiB/s (12.7MB/s)(100MiB/8258msec) 00:14:02.654 slat (usec): min=4, max=1965, avg=59.18, stdev=132.36 00:14:02.654 clat (usec): min=3419, max=51531, avg=12895.19, stdev=8369.62 00:14:02.654 lat (usec): min=3560, max=51541, avg=12954.37, stdev=8381.17 00:14:02.654 clat percentiles (usec): 00:14:02.654 | 1.00th=[ 3752], 5.00th=[ 4113], 10.00th=[ 5407], 20.00th=[ 6915], 00:14:02.654 | 30.00th=[ 8160], 40.00th=[ 8979], 50.00th=[10552], 60.00th=[12125], 00:14:02.654 | 70.00th=[14484], 80.00th=[16909], 90.00th=[22152], 95.00th=[33817], 00:14:02.654 | 99.00th=[44303], 
99.50th=[47449], 99.90th=[51643], 99.95th=[51643], 00:14:02.654 | 99.99th=[51643] 00:14:02.654 write: IOPS=105, BW=13.2MiB/s (13.9MB/s)(116MiB/8747msec); 0 zone resets 00:14:02.654 slat (usec): min=35, max=4762, avg=196.87, stdev=348.88 00:14:02.654 clat (msec): min=35, max=336, avg=74.91, stdev=33.83 00:14:02.654 lat (msec): min=36, max=338, avg=75.10, stdev=33.85 00:14:02.654 clat percentiles (msec): 00:14:02.654 | 1.00th=[ 40], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 51], 00:14:02.654 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 73], 00:14:02.654 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 118], 95.00th=[ 138], 00:14:02.654 | 99.00th=[ 199], 99.50th=[ 209], 99.90th=[ 338], 99.95th=[ 338], 00:14:02.654 | 99.99th=[ 338] 00:14:02.654 bw ( KiB/s): min= 3328, max=21760, per=1.08%, avg=11742.00, stdev=4872.32, samples=20 00:14:02.654 iops : min= 26, max= 170, avg=91.65, stdev=38.09, samples=20 00:14:02.654 lat (msec) : 4=1.74%, 10=19.77%, 20=18.84%, 50=16.81%, 100=34.03% 00:14:02.654 lat (msec) : 250=8.58%, 500=0.23% 00:14:02.654 cpu : usr=0.93%, sys=0.31%, ctx=3056, majf=0, minf=1 00:14:02.654 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.654 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.654 issued rwts: total=800,925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.654 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.654 job65: (groupid=0, jobs=1): err= 0: pid=70518: Thu Jul 25 08:56:09 2024 00:14:02.654 read: IOPS=110, BW=13.8MiB/s (14.5MB/s)(120MiB/8672msec) 00:14:02.654 slat (usec): min=4, max=1236, avg=50.17, stdev=100.47 00:14:02.654 clat (usec): min=4107, max=97622, avg=13151.21, stdev=10802.30 00:14:02.654 lat (usec): min=4284, max=97630, avg=13201.37, stdev=10798.43 00:14:02.654 clat percentiles (usec): 00:14:02.654 | 1.00th=[ 4752], 5.00th=[ 5342], 10.00th=[ 5604], 20.00th=[ 6521], 
00:14:02.654 | 30.00th=[ 7504], 40.00th=[ 8848], 50.00th=[10814], 60.00th=[12256], 00:14:02.654 | 70.00th=[14353], 80.00th=[16712], 90.00th=[21627], 95.00th=[27919], 00:14:02.654 | 99.00th=[66847], 99.50th=[93848], 99.90th=[98042], 99.95th=[98042], 00:14:02.654 | 99.99th=[98042] 00:14:02.654 write: IOPS=115, BW=14.4MiB/s (15.1MB/s)(122MiB/8457msec); 0 zone resets 00:14:02.654 slat (usec): min=42, max=5519, avg=193.25, stdev=377.41 00:14:02.654 clat (msec): min=36, max=268, avg=68.64, stdev=27.88 00:14:02.654 lat (msec): min=36, max=268, avg=68.83, stdev=27.90 00:14:02.654 clat percentiles (msec): 00:14:02.654 | 1.00th=[ 40], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 48], 00:14:02.654 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 66], 00:14:02.654 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 107], 95.00th=[ 127], 00:14:02.654 | 99.00th=[ 165], 99.50th=[ 188], 99.90th=[ 271], 99.95th=[ 271], 00:14:02.654 | 99.99th=[ 271] 00:14:02.654 bw ( KiB/s): min= 2048, max=21248, per=1.13%, avg=12373.60, stdev=5709.78, samples=20 00:14:02.654 iops : min= 16, max= 166, avg=96.55, stdev=44.62, samples=20 00:14:02.654 lat (msec) : 10=23.11%, 20=20.42%, 50=18.56%, 100=31.85%, 250=6.00% 00:14:02.654 lat (msec) : 500=0.05% 00:14:02.654 cpu : usr=1.09%, sys=0.43%, ctx=3163, majf=0, minf=3 00:14:02.654 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.654 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.654 issued rwts: total=960,974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.654 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.654 job66: (groupid=0, jobs=1): err= 0: pid=70520: Thu Jul 25 08:56:09 2024 00:14:02.654 read: IOPS=112, BW=14.0MiB/s (14.7MB/s)(120MiB/8568msec) 00:14:02.654 slat (usec): min=5, max=1455, avg=58.95, stdev=118.31 00:14:02.654 clat (msec): min=2, max=144, avg=12.36, stdev=14.17 00:14:02.654 lat 
(msec): min=3, max=144, avg=12.42, stdev=14.17 00:14:02.654 clat percentiles (msec): 00:14:02.654 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:14:02.654 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 11], 00:14:02.654 | 70.00th=[ 13], 80.00th=[ 16], 90.00th=[ 19], 95.00th=[ 29], 00:14:02.654 | 99.00th=[ 83], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:14:02.654 | 99.99th=[ 144] 00:14:02.654 write: IOPS=115, BW=14.4MiB/s (15.2MB/s)(124MiB/8564msec); 0 zone resets 00:14:02.654 slat (usec): min=33, max=6571, avg=200.87, stdev=378.04 00:14:02.654 clat (msec): min=16, max=249, avg=68.37, stdev=29.28 00:14:02.654 lat (msec): min=16, max=249, avg=68.57, stdev=29.28 00:14:02.654 clat percentiles (msec): 00:14:02.654 | 1.00th=[ 29], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 47], 00:14:02.654 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 66], 00:14:02.654 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 126], 00:14:02.654 | 99.00th=[ 180], 99.50th=[ 199], 99.90th=[ 249], 99.95th=[ 249], 00:14:02.654 | 99.99th=[ 249] 00:14:02.654 bw ( KiB/s): min= 5888, max=21248, per=1.21%, avg=13233.63, stdev=4719.32, samples=19 00:14:02.654 iops : min= 46, max= 166, avg=103.21, stdev=36.88, samples=19 00:14:02.654 lat (msec) : 4=1.54%, 10=26.72%, 20=17.08%, 50=17.18%, 100=30.77% 00:14:02.654 lat (msec) : 250=6.72% 00:14:02.655 cpu : usr=1.23%, sys=0.31%, ctx=3231, majf=0, minf=3 00:14:02.655 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.655 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.655 issued rwts: total=960,990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.655 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.655 job67: (groupid=0, jobs=1): err= 0: pid=70521: Thu Jul 25 08:56:09 2024 00:14:02.655 read: IOPS=98, BW=12.3MiB/s (12.9MB/s)(100MiB/8121msec) 00:14:02.655 slat 
(usec): min=5, max=2418, avg=55.76, stdev=131.00 00:14:02.655 clat (msec): min=2, max=157, avg=17.10, stdev=20.36 00:14:02.655 lat (msec): min=2, max=157, avg=17.15, stdev=20.36 00:14:02.655 clat percentiles (msec): 00:14:02.655 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:14:02.655 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 14], 00:14:02.655 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 29], 95.00th=[ 39], 00:14:02.655 | 99.00th=[ 138], 99.50th=[ 148], 99.90th=[ 159], 99.95th=[ 159], 00:14:02.655 | 99.99th=[ 159] 00:14:02.655 write: IOPS=101, BW=12.7MiB/s (13.4MB/s)(106MiB/8312msec); 0 zone resets 00:14:02.655 slat (usec): min=38, max=3676, avg=173.50, stdev=262.90 00:14:02.655 clat (msec): min=36, max=283, avg=77.87, stdev=33.84 00:14:02.655 lat (msec): min=36, max=283, avg=78.04, stdev=33.85 00:14:02.655 clat percentiles (msec): 00:14:02.655 | 1.00th=[ 40], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 53], 00:14:02.655 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 78], 00:14:02.655 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 116], 95.00th=[ 144], 00:14:02.655 | 99.00th=[ 215], 99.50th=[ 230], 99.90th=[ 284], 99.95th=[ 284], 00:14:02.655 | 99.99th=[ 284] 00:14:02.655 bw ( KiB/s): min= 509, max=20183, per=0.98%, avg=10748.90, stdev=5464.93, samples=20 00:14:02.655 iops : min= 3, max= 157, avg=83.75, stdev=42.86, samples=20 00:14:02.655 lat (msec) : 4=0.73%, 10=17.55%, 20=20.95%, 50=15.54%, 100=36.31% 00:14:02.655 lat (msec) : 250=8.86%, 500=0.06% 00:14:02.655 cpu : usr=0.85%, sys=0.30%, ctx=2894, majf=0, minf=1 00:14:02.655 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.655 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.655 issued rwts: total=800,847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.655 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.655 job68: (groupid=0, 
jobs=1): err= 0: pid=70522: Thu Jul 25 08:56:09 2024 00:14:02.655 read: IOPS=116, BW=14.6MiB/s (15.3MB/s)(120MiB/8226msec) 00:14:02.655 slat (usec): min=5, max=1233, avg=53.24, stdev=100.00 00:14:02.655 clat (usec): min=3528, max=47370, avg=11544.68, stdev=7478.54 00:14:02.655 lat (usec): min=3550, max=47378, avg=11597.92, stdev=7485.84 00:14:02.655 clat percentiles (usec): 00:14:02.655 | 1.00th=[ 3916], 5.00th=[ 4621], 10.00th=[ 5014], 20.00th=[ 5735], 00:14:02.655 | 30.00th=[ 6325], 40.00th=[ 6980], 50.00th=[ 8455], 60.00th=[10421], 00:14:02.655 | 70.00th=[13435], 80.00th=[17433], 90.00th=[22676], 95.00th=[26346], 00:14:02.655 | 99.00th=[35390], 99.50th=[36963], 99.90th=[47449], 99.95th=[47449], 00:14:02.655 | 99.99th=[47449] 00:14:02.655 write: IOPS=111, BW=14.0MiB/s (14.7MB/s)(121MiB/8655msec); 0 zone resets 00:14:02.655 slat (usec): min=31, max=3615, avg=171.19, stdev=274.66 00:14:02.655 clat (msec): min=36, max=331, avg=70.79, stdev=31.26 00:14:02.655 lat (msec): min=36, max=331, avg=70.96, stdev=31.26 00:14:02.655 clat percentiles (msec): 00:14:02.655 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 48], 00:14:02.655 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 69], 00:14:02.655 | 70.00th=[ 80], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 120], 00:14:02.655 | 99.00th=[ 203], 99.50th=[ 226], 99.90th=[ 334], 99.95th=[ 334], 00:14:02.655 | 99.99th=[ 334] 00:14:02.655 bw ( KiB/s): min= 3840, max=20653, per=1.12%, avg=12278.35, stdev=5458.11, samples=20 00:14:02.655 iops : min= 30, max= 161, avg=95.75, stdev=42.59, samples=20 00:14:02.655 lat (msec) : 4=0.67%, 10=27.96%, 20=14.32%, 50=20.07%, 100=29.77% 00:14:02.655 lat (msec) : 250=7.05%, 500=0.16% 00:14:02.655 cpu : usr=1.10%, sys=0.25%, ctx=3236, majf=0, minf=3 00:14:02.655 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.655 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:14:02.655 issued rwts: total=960,968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.655 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.655 job69: (groupid=0, jobs=1): err= 0: pid=70523: Thu Jul 25 08:56:09 2024 00:14:02.655 read: IOPS=110, BW=13.8MiB/s (14.5MB/s)(120MiB/8689msec) 00:14:02.655 slat (usec): min=4, max=2686, avg=69.65, stdev=180.45 00:14:02.655 clat (usec): min=1993, max=56119, avg=12763.05, stdev=7728.46 00:14:02.655 lat (usec): min=2056, max=56136, avg=12832.71, stdev=7729.91 00:14:02.655 clat percentiles (usec): 00:14:02.655 | 1.00th=[ 3392], 5.00th=[ 5342], 10.00th=[ 6325], 20.00th=[ 7504], 00:14:02.655 | 30.00th=[ 8455], 40.00th=[ 9503], 50.00th=[10945], 60.00th=[12518], 00:14:02.655 | 70.00th=[13829], 80.00th=[15926], 90.00th=[20841], 95.00th=[25560], 00:14:02.655 | 99.00th=[46400], 99.50th=[53216], 99.90th=[56361], 99.95th=[56361], 00:14:02.655 | 99.99th=[56361] 00:14:02.655 write: IOPS=117, BW=14.7MiB/s (15.4MB/s)(125MiB/8521msec); 0 zone resets 00:14:02.655 slat (usec): min=42, max=9004, avg=208.34, stdev=481.71 00:14:02.655 clat (msec): min=10, max=180, avg=67.39, stdev=26.43 00:14:02.655 lat (msec): min=10, max=180, avg=67.60, stdev=26.44 00:14:02.655 clat percentiles (msec): 00:14:02.655 | 1.00th=[ 28], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 48], 00:14:02.655 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 65], 00:14:02.655 | 70.00th=[ 74], 80.00th=[ 87], 90.00th=[ 102], 95.00th=[ 121], 00:14:02.655 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 182], 00:14:02.655 | 99.99th=[ 182] 00:14:02.655 bw ( KiB/s): min= 2043, max=21504, per=1.16%, avg=12705.35, stdev=5694.15, samples=20 00:14:02.655 iops : min= 15, max= 168, avg=99.05, stdev=44.63, samples=20 00:14:02.655 lat (msec) : 2=0.05%, 4=0.97%, 10=20.70%, 20=22.08%, 50=17.95% 00:14:02.655 lat (msec) : 100=32.53%, 250=5.71% 00:14:02.655 cpu : usr=0.92%, sys=0.44%, ctx=3448, majf=0, minf=5 00:14:02.655 IO depths : 1=0.7%, 2=1.3%, 
4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.655 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.655 issued rwts: total=960,1001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.655 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.655 job70: (groupid=0, jobs=1): err= 0: pid=70524: Thu Jul 25 08:56:09 2024 00:14:02.655 read: IOPS=78, BW=9.85MiB/s (10.3MB/s)(80.0MiB/8120msec) 00:14:02.655 slat (usec): min=4, max=2229, avg=72.46, stdev=182.79 00:14:02.655 clat (msec): min=6, max=101, avg=21.44, stdev=14.94 00:14:02.655 lat (msec): min=6, max=102, avg=21.52, stdev=14.94 00:14:02.655 clat percentiles (msec): 00:14:02.655 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:14:02.655 | 30.00th=[ 13], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 21], 00:14:02.655 | 70.00th=[ 24], 80.00th=[ 27], 90.00th=[ 38], 95.00th=[ 50], 00:14:02.655 | 99.00th=[ 87], 99.50th=[ 93], 99.90th=[ 102], 99.95th=[ 102], 00:14:02.655 | 99.99th=[ 102] 00:14:02.655 write: IOPS=82, BW=10.3MiB/s (10.8MB/s)(85.5MiB/8336msec); 0 zone resets 00:14:02.655 slat (usec): min=36, max=3796, avg=156.81, stdev=250.58 00:14:02.655 clat (msec): min=9, max=328, avg=96.25, stdev=40.07 00:14:02.655 lat (msec): min=9, max=328, avg=96.40, stdev=40.07 00:14:02.655 clat percentiles (msec): 00:14:02.655 | 1.00th=[ 24], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 69], 00:14:02.655 | 30.00th=[ 75], 40.00th=[ 82], 50.00th=[ 88], 60.00th=[ 96], 00:14:02.655 | 70.00th=[ 105], 80.00th=[ 114], 90.00th=[ 133], 95.00th=[ 157], 00:14:02.655 | 99.00th=[ 279], 99.50th=[ 309], 99.90th=[ 330], 99.95th=[ 330], 00:14:02.655 | 99.99th=[ 330] 00:14:02.655 bw ( KiB/s): min= 1792, max=14818, per=0.84%, avg=9116.95, stdev=3960.23, samples=19 00:14:02.655 iops : min= 14, max= 115, avg=71.00, stdev=30.82, samples=19 00:14:02.655 lat (msec) : 10=7.85%, 20=20.69%, 50=18.43%, 100=35.20%, 250=16.77% 
00:14:02.655 lat (msec) : 500=1.06% 00:14:02.655 cpu : usr=0.68%, sys=0.25%, ctx=2269, majf=0, minf=3 00:14:02.655 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.655 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.655 issued rwts: total=640,684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.655 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.656 job71: (groupid=0, jobs=1): err= 0: pid=70525: Thu Jul 25 08:56:09 2024 00:14:02.656 read: IOPS=87, BW=10.9MiB/s (11.4MB/s)(80.0MiB/7331msec) 00:14:02.656 slat (usec): min=5, max=964, avg=57.35, stdev=116.15 00:14:02.656 clat (usec): min=4760, max=40784, avg=11460.47, stdev=4610.41 00:14:02.656 lat (usec): min=4802, max=40812, avg=11517.82, stdev=4612.05 00:14:02.656 clat percentiles (usec): 00:14:02.656 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 7111], 20.00th=[ 8225], 00:14:02.656 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10290], 60.00th=[11207], 00:14:02.656 | 70.00th=[12387], 80.00th=[13698], 90.00th=[16712], 95.00th=[20317], 00:14:02.656 | 99.00th=[30278], 99.50th=[32900], 99.90th=[40633], 99.95th=[40633], 00:14:02.656 | 99.99th=[40633] 00:14:02.656 write: IOPS=70, BW=9034KiB/s (9251kB/s)(80.0MiB/9068msec); 0 zone resets 00:14:02.656 slat (usec): min=41, max=11370, avg=235.37, stdev=625.88 00:14:02.656 clat (msec): min=58, max=352, avg=112.49, stdev=49.09 00:14:02.656 lat (msec): min=58, max=352, avg=112.73, stdev=49.11 00:14:02.656 clat percentiles (msec): 00:14:02.656 | 1.00th=[ 59], 5.00th=[ 62], 10.00th=[ 66], 20.00th=[ 71], 00:14:02.656 | 30.00th=[ 77], 40.00th=[ 86], 50.00th=[ 99], 60.00th=[ 116], 00:14:02.656 | 70.00th=[ 132], 80.00th=[ 148], 90.00th=[ 176], 95.00th=[ 211], 00:14:02.656 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 351], 99.95th=[ 351], 00:14:02.656 | 99.99th=[ 351] 00:14:02.656 bw ( KiB/s): min= 3328, max=15104, per=0.77%, 
avg=8418.74, stdev=3400.80, samples=19 00:14:02.656 iops : min= 26, max= 118, avg=65.58, stdev=26.63, samples=19 00:14:02.656 lat (msec) : 10=22.42%, 20=25.00%, 50=2.58%, 100=25.55%, 250=23.59% 00:14:02.656 lat (msec) : 500=0.86% 00:14:02.656 cpu : usr=0.64%, sys=0.31%, ctx=2146, majf=0, minf=3 00:14:02.656 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.656 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.656 issued rwts: total=640,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.656 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.656 job72: (groupid=0, jobs=1): err= 0: pid=70526: Thu Jul 25 08:56:09 2024 00:14:02.656 read: IOPS=77, BW=9895KiB/s (10.1MB/s)(80.0MiB/8279msec) 00:14:02.656 slat (usec): min=4, max=3256, avg=77.14, stdev=223.36 00:14:02.656 clat (msec): min=6, max=101, avg=21.49, stdev=15.42 00:14:02.656 lat (msec): min=6, max=101, avg=21.56, stdev=15.41 00:14:02.656 clat percentiles (msec): 00:14:02.656 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:14:02.656 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 17], 60.00th=[ 21], 00:14:02.656 | 70.00th=[ 24], 80.00th=[ 28], 90.00th=[ 40], 95.00th=[ 53], 00:14:02.656 | 99.00th=[ 85], 99.50th=[ 89], 99.90th=[ 103], 99.95th=[ 103], 00:14:02.656 | 99.99th=[ 103] 00:14:02.656 write: IOPS=82, BW=10.3MiB/s (10.8MB/s)(85.9MiB/8326msec); 0 zone resets 00:14:02.656 slat (usec): min=38, max=7939, avg=192.10, stdev=436.05 00:14:02.656 clat (msec): min=27, max=331, avg=95.85, stdev=39.60 00:14:02.656 lat (msec): min=27, max=331, avg=96.04, stdev=39.60 00:14:02.656 clat percentiles (msec): 00:14:02.656 | 1.00th=[ 35], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 68], 00:14:02.656 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 92], 00:14:02.656 | 70.00th=[ 102], 80.00th=[ 118], 90.00th=[ 146], 95.00th=[ 178], 00:14:02.656 | 99.00th=[ 247], 
99.50th=[ 313], 99.90th=[ 334], 99.95th=[ 334], 00:14:02.656 | 99.99th=[ 334] 00:14:02.656 bw ( KiB/s): min= 1792, max=15840, per=0.84%, avg=9144.21, stdev=4522.90, samples=19 00:14:02.656 iops : min= 14, max= 123, avg=71.21, stdev=35.26, samples=19 00:14:02.656 lat (msec) : 10=8.59%, 20=19.74%, 50=17.56%, 100=37.98%, 250=15.67% 00:14:02.656 lat (msec) : 500=0.45% 00:14:02.656 cpu : usr=0.61%, sys=0.37%, ctx=2278, majf=0, minf=3 00:14:02.656 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.656 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.656 issued rwts: total=640,687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.656 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.656 job73: (groupid=0, jobs=1): err= 0: pid=70527: Thu Jul 25 08:56:09 2024 00:14:02.656 read: IOPS=65, BW=8325KiB/s (8525kB/s)(60.0MiB/7380msec) 00:14:02.656 slat (usec): min=6, max=5488, avg=63.80, stdev=269.67 00:14:02.656 clat (msec): min=3, max=106, avg=13.77, stdev=11.15 00:14:02.656 lat (msec): min=3, max=106, avg=13.84, stdev=11.19 00:14:02.656 clat percentiles (msec): 00:14:02.656 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 9], 00:14:02.656 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 12], 60.00th=[ 13], 00:14:02.656 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 21], 95.00th=[ 27], 00:14:02.656 | 99.00th=[ 67], 99.50th=[ 81], 99.90th=[ 107], 99.95th=[ 107], 00:14:02.656 | 99.99th=[ 107] 00:14:02.656 write: IOPS=62, BW=7970KiB/s (8162kB/s)(71.8MiB/9218msec); 0 zone resets 00:14:02.656 slat (usec): min=37, max=5180, avg=227.30, stdev=415.61 00:14:02.656 clat (msec): min=58, max=566, avg=127.75, stdev=63.32 00:14:02.656 lat (msec): min=60, max=566, avg=127.97, stdev=63.29 00:14:02.656 clat percentiles (msec): 00:14:02.656 | 1.00th=[ 64], 5.00th=[ 69], 10.00th=[ 73], 20.00th=[ 82], 00:14:02.656 | 30.00th=[ 90], 40.00th=[ 97], 
50.00th=[ 109], 60.00th=[ 122], 00:14:02.656 | 70.00th=[ 136], 80.00th=[ 167], 90.00th=[ 209], 95.00th=[ 264], 00:14:02.656 | 99.00th=[ 342], 99.50th=[ 380], 99.90th=[ 567], 99.95th=[ 567], 00:14:02.656 | 99.99th=[ 567] 00:14:02.656 bw ( KiB/s): min= 2299, max=13312, per=0.66%, avg=7255.60, stdev=3180.12, samples=20 00:14:02.656 iops : min= 17, max= 104, avg=56.50, stdev=24.93, samples=20 00:14:02.656 lat (msec) : 4=0.19%, 10=18.69%, 20=21.63%, 50=3.80%, 100=24.67% 00:14:02.656 lat (msec) : 250=28.08%, 500=2.85%, 750=0.09% 00:14:02.656 cpu : usr=0.56%, sys=0.23%, ctx=1832, majf=0, minf=3 00:14:02.656 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.656 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.656 issued rwts: total=480,574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.656 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.656 job74: (groupid=0, jobs=1): err= 0: pid=70528: Thu Jul 25 08:56:09 2024 00:14:02.656 read: IOPS=84, BW=10.6MiB/s (11.1MB/s)(80.0MiB/7565msec) 00:14:02.656 slat (usec): min=4, max=1395, avg=64.38, stdev=135.49 00:14:02.656 clat (usec): min=6763, max=45937, avg=13640.81, stdev=7333.44 00:14:02.656 lat (usec): min=6774, max=45990, avg=13705.19, stdev=7335.86 00:14:02.656 clat percentiles (usec): 00:14:02.656 | 1.00th=[ 7111], 5.00th=[ 7898], 10.00th=[ 8291], 20.00th=[ 8979], 00:14:02.656 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[11076], 60.00th=[11863], 00:14:02.656 | 70.00th=[13566], 80.00th=[16057], 90.00th=[22938], 95.00th=[31589], 00:14:02.656 | 99.00th=[43254], 99.50th=[45351], 99.90th=[45876], 99.95th=[45876], 00:14:02.656 | 99.99th=[45876] 00:14:02.656 write: IOPS=72, BW=9304KiB/s (9527kB/s)(81.4MiB/8956msec); 0 zone resets 00:14:02.656 slat (usec): min=40, max=6836, avg=208.22, stdev=434.87 00:14:02.656 clat (msec): min=48, max=383, avg=108.94, stdev=51.14 
00:14:02.656 lat (msec): min=48, max=384, avg=109.15, stdev=51.16 00:14:02.656 clat percentiles (msec): 00:14:02.656 | 1.00th=[ 57], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 70], 00:14:02.656 | 30.00th=[ 77], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 104], 00:14:02.656 | 70.00th=[ 122], 80.00th=[ 138], 90.00th=[ 180], 95.00th=[ 211], 00:14:02.656 | 99.00th=[ 292], 99.50th=[ 309], 99.90th=[ 384], 99.95th=[ 384], 00:14:02.656 | 99.99th=[ 384] 00:14:02.656 bw ( KiB/s): min= 1024, max=15104, per=0.75%, avg=8239.80, stdev=3952.52, samples=20 00:14:02.656 iops : min= 8, max= 118, avg=64.20, stdev=30.92, samples=20 00:14:02.656 lat (msec) : 10=17.35%, 20=25.79%, 50=6.58%, 100=28.66%, 250=19.98% 00:14:02.656 lat (msec) : 500=1.63% 00:14:02.656 cpu : usr=0.71%, sys=0.31%, ctx=2168, majf=0, minf=3 00:14:02.656 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.656 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.656 issued rwts: total=640,651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.656 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.656 job75: (groupid=0, jobs=1): err= 0: pid=70529: Thu Jul 25 08:56:09 2024 00:14:02.656 read: IOPS=80, BW=10.0MiB/s (10.5MB/s)(80.0MiB/7989msec) 00:14:02.656 slat (usec): min=4, max=3551, avg=79.27, stdev=259.62 00:14:02.656 clat (msec): min=6, max=117, avg=16.67, stdev=12.29 00:14:02.656 lat (msec): min=6, max=117, avg=16.75, stdev=12.29 00:14:02.656 clat percentiles (msec): 00:14:02.656 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 10], 00:14:02.656 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 15], 60.00th=[ 16], 00:14:02.656 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 26], 95.00th=[ 32], 00:14:02.656 | 99.00th=[ 91], 99.50th=[ 92], 99.90th=[ 117], 99.95th=[ 117], 00:14:02.656 | 99.99th=[ 117] 00:14:02.656 write: IOPS=78, BW=9.81MiB/s (10.3MB/s)(85.6MiB/8727msec); 0 zone resets 
00:14:02.656 slat (usec): min=41, max=3818, avg=219.88, stdev=465.26 00:14:02.656 clat (msec): min=36, max=401, avg=100.76, stdev=44.83 00:14:02.656 lat (msec): min=36, max=404, avg=100.98, stdev=44.91 00:14:02.656 clat percentiles (msec): 00:14:02.656 | 1.00th=[ 43], 5.00th=[ 64], 10.00th=[ 67], 20.00th=[ 73], 00:14:02.656 | 30.00th=[ 79], 40.00th=[ 83], 50.00th=[ 89], 60.00th=[ 95], 00:14:02.656 | 70.00th=[ 105], 80.00th=[ 120], 90.00th=[ 142], 95.00th=[ 192], 00:14:02.656 | 99.00th=[ 296], 99.50th=[ 342], 99.90th=[ 401], 99.95th=[ 401], 00:14:02.656 | 99.99th=[ 401] 00:14:02.656 bw ( KiB/s): min= 1792, max=14307, per=0.79%, avg=8662.80, stdev=3848.46, samples=20 00:14:02.656 iops : min= 14, max= 111, avg=67.55, stdev=29.93, samples=20 00:14:02.656 lat (msec) : 10=10.94%, 20=26.11%, 50=11.02%, 100=33.13%, 250=17.74% 00:14:02.656 lat (msec) : 500=1.06% 00:14:02.656 cpu : usr=0.75%, sys=0.24%, ctx=2231, majf=0, minf=6 00:14:02.656 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.656 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.656 issued rwts: total=640,685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.657 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.657 job76: (groupid=0, jobs=1): err= 0: pid=70530: Thu Jul 25 08:56:09 2024 00:14:02.657 read: IOPS=76, BW=9756KiB/s (9990kB/s)(80.0MiB/8397msec) 00:14:02.657 slat (usec): min=5, max=4079, avg=76.00, stdev=264.04 00:14:02.657 clat (msec): min=6, max=134, avg=20.03, stdev=14.30 00:14:02.657 lat (msec): min=6, max=134, avg=20.10, stdev=14.29 00:14:02.657 clat percentiles (msec): 00:14:02.657 | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:14:02.657 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 20], 00:14:02.657 | 70.00th=[ 23], 80.00th=[ 26], 90.00th=[ 33], 95.00th=[ 42], 00:14:02.657 | 99.00th=[ 108], 99.50th=[ 126], 99.90th=[ 
136], 99.95th=[ 136], 00:14:02.657 | 99.99th=[ 136] 00:14:02.657 write: IOPS=84, BW=10.5MiB/s (11.0MB/s)(88.6MiB/8426msec); 0 zone resets 00:14:02.657 slat (usec): min=33, max=3108, avg=180.30, stdev=304.28 00:14:02.657 clat (msec): min=13, max=315, avg=94.22, stdev=36.73 00:14:02.657 lat (msec): min=13, max=315, avg=94.40, stdev=36.74 00:14:02.657 clat percentiles (msec): 00:14:02.657 | 1.00th=[ 18], 5.00th=[ 60], 10.00th=[ 65], 20.00th=[ 69], 00:14:02.657 | 30.00th=[ 74], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 91], 00:14:02.657 | 70.00th=[ 102], 80.00th=[ 114], 90.00th=[ 142], 95.00th=[ 176], 00:14:02.657 | 99.00th=[ 222], 99.50th=[ 241], 99.90th=[ 317], 99.95th=[ 317], 00:14:02.657 | 99.99th=[ 317] 00:14:02.657 bw ( KiB/s): min= 1792, max=14848, per=0.82%, avg=8980.30, stdev=4496.44, samples=20 00:14:02.657 iops : min= 14, max= 116, avg=69.95, stdev=35.08, samples=20 00:14:02.657 lat (msec) : 10=7.34%, 20=22.91%, 50=17.49%, 100=35.66%, 250=16.38% 00:14:02.657 lat (msec) : 500=0.22% 00:14:02.657 cpu : usr=0.64%, sys=0.38%, ctx=2235, majf=0, minf=7 00:14:02.657 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.657 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.657 issued rwts: total=640,709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.657 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.657 job77: (groupid=0, jobs=1): err= 0: pid=70531: Thu Jul 25 08:56:09 2024 00:14:02.657 read: IOPS=63, BW=8191KiB/s (8387kB/s)(60.0MiB/7501msec) 00:14:02.657 slat (usec): min=5, max=4070, avg=63.48, stdev=220.89 00:14:02.657 clat (msec): min=4, max=181, avg=20.57, stdev=25.45 00:14:02.657 lat (msec): min=4, max=181, avg=20.63, stdev=25.45 00:14:02.657 clat percentiles (msec): 00:14:02.657 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:14:02.657 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 
15], 00:14:02.657 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 32], 95.00th=[ 86], 00:14:02.657 | 99.00th=[ 138], 99.50th=[ 178], 99.90th=[ 182], 99.95th=[ 182], 00:14:02.657 | 99.99th=[ 182] 00:14:02.657 write: IOPS=68, BW=8745KiB/s (8955kB/s)(75.1MiB/8797msec); 0 zone resets 00:14:02.657 slat (usec): min=40, max=3830, avg=175.57, stdev=279.99 00:14:02.657 clat (msec): min=57, max=374, avg=116.25, stdev=54.06 00:14:02.657 lat (msec): min=58, max=374, avg=116.43, stdev=54.07 00:14:02.657 clat percentiles (msec): 00:14:02.657 | 1.00th=[ 59], 5.00th=[ 66], 10.00th=[ 72], 20.00th=[ 78], 00:14:02.657 | 30.00th=[ 84], 40.00th=[ 92], 50.00th=[ 100], 60.00th=[ 112], 00:14:02.657 | 70.00th=[ 125], 80.00th=[ 142], 90.00th=[ 182], 95.00th=[ 224], 00:14:02.657 | 99.00th=[ 313], 99.50th=[ 359], 99.90th=[ 376], 99.95th=[ 376], 00:14:02.657 | 99.99th=[ 376] 00:14:02.657 bw ( KiB/s): min= 1792, max=12288, per=0.70%, avg=7602.30, stdev=3564.64, samples=20 00:14:02.657 iops : min= 14, max= 96, avg=59.25, stdev=27.83, samples=20 00:14:02.657 lat (msec) : 10=7.49%, 20=28.58%, 50=5.18%, 100=29.79%, 250=26.64% 00:14:02.657 lat (msec) : 500=2.31% 00:14:02.657 cpu : usr=0.61%, sys=0.20%, ctx=1822, majf=0, minf=3 00:14:02.657 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.657 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.657 issued rwts: total=480,601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.657 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.657 job78: (groupid=0, jobs=1): err= 0: pid=70532: Thu Jul 25 08:56:09 2024 00:14:02.657 read: IOPS=81, BW=10.2MiB/s (10.7MB/s)(80.0MiB/7814msec) 00:14:02.657 slat (usec): min=5, max=3560, avg=81.40, stdev=244.97 00:14:02.657 clat (usec): min=6905, max=79769, avg=15864.21, stdev=8804.25 00:14:02.657 lat (usec): min=6946, max=79798, avg=15945.61, stdev=8840.09 00:14:02.657 clat 
percentiles (usec): 00:14:02.657 | 1.00th=[ 7177], 5.00th=[ 8029], 10.00th=[ 9110], 20.00th=[10814], 00:14:02.657 | 30.00th=[11600], 40.00th=[12518], 50.00th=[13698], 60.00th=[14877], 00:14:02.657 | 70.00th=[16581], 80.00th=[19006], 90.00th=[23462], 95.00th=[29754], 00:14:02.657 | 99.00th=[59507], 99.50th=[66847], 99.90th=[80217], 99.95th=[80217], 00:14:02.657 | 99.99th=[80217] 00:14:02.657 write: IOPS=76, BW=9760KiB/s (9994kB/s)(83.6MiB/8774msec); 0 zone resets 00:14:02.657 slat (usec): min=33, max=7599, avg=182.40, stdev=441.99 00:14:02.657 clat (msec): min=58, max=443, avg=103.62, stdev=49.18 00:14:02.657 lat (msec): min=58, max=443, avg=103.81, stdev=49.22 00:14:02.657 clat percentiles (msec): 00:14:02.657 | 1.00th=[ 61], 5.00th=[ 63], 10.00th=[ 66], 20.00th=[ 71], 00:14:02.657 | 30.00th=[ 78], 40.00th=[ 83], 50.00th=[ 89], 60.00th=[ 95], 00:14:02.657 | 70.00th=[ 108], 80.00th=[ 126], 90.00th=[ 155], 95.00th=[ 201], 00:14:02.657 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 443], 99.95th=[ 443], 00:14:02.657 | 99.99th=[ 443] 00:14:02.657 bw ( KiB/s): min= 1788, max=15360, per=0.78%, avg=8470.80, stdev=4118.03, samples=20 00:14:02.657 iops : min= 13, max= 120, avg=66.00, stdev=32.32, samples=20 00:14:02.657 lat (msec) : 10=6.34%, 20=33.92%, 50=7.79%, 100=33.92%, 250=16.73% 00:14:02.657 lat (msec) : 500=1.30% 00:14:02.657 cpu : usr=0.65%, sys=0.30%, ctx=2233, majf=0, minf=3 00:14:02.657 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.657 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.657 issued rwts: total=640,669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.657 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.657 job79: (groupid=0, jobs=1): err= 0: pid=70533: Thu Jul 25 08:56:09 2024 00:14:02.657 read: IOPS=79, BW=9.96MiB/s (10.4MB/s)(80.0MiB/8034msec) 00:14:02.657 slat (usec): min=5, max=2306, 
avg=60.39, stdev=148.40 00:14:02.657 clat (msec): min=7, max=153, avg=21.62, stdev=17.74 00:14:02.657 lat (msec): min=7, max=153, avg=21.69, stdev=17.74 00:14:02.657 clat percentiles (msec): 00:14:02.657 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 12], 00:14:02.657 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 19], 60.00th=[ 21], 00:14:02.657 | 70.00th=[ 23], 80.00th=[ 26], 90.00th=[ 35], 95.00th=[ 44], 00:14:02.657 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 155], 99.95th=[ 155], 00:14:02.657 | 99.99th=[ 155] 00:14:02.657 write: IOPS=85, BW=10.6MiB/s (11.1MB/s)(88.4MiB/8311msec); 0 zone resets 00:14:02.657 slat (usec): min=29, max=4696, avg=204.80, stdev=422.95 00:14:02.657 clat (msec): min=39, max=391, avg=92.86, stdev=38.90 00:14:02.657 lat (msec): min=39, max=392, avg=93.07, stdev=38.94 00:14:02.657 clat percentiles (msec): 00:14:02.657 | 1.00th=[ 56], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 68], 00:14:02.657 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 91], 00:14:02.657 | 70.00th=[ 97], 80.00th=[ 110], 90.00th=[ 130], 95.00th=[ 148], 00:14:02.657 | 99.00th=[ 218], 99.50th=[ 380], 99.90th=[ 393], 99.95th=[ 393], 00:14:02.657 | 99.99th=[ 393] 00:14:02.657 bw ( KiB/s): min= 1788, max=15872, per=0.82%, avg=8958.45, stdev=4292.42, samples=20 00:14:02.657 iops : min= 13, max= 124, avg=69.75, stdev=33.78, samples=20 00:14:02.657 lat (msec) : 10=5.79%, 20=21.68%, 50=18.56%, 100=38.68%, 250=14.77% 00:14:02.657 lat (msec) : 500=0.52% 00:14:02.657 cpu : usr=0.66%, sys=0.30%, ctx=2244, majf=0, minf=5 00:14:02.657 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.657 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.657 issued rwts: total=640,707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.657 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.657 job80: (groupid=0, jobs=1): err= 0: pid=70534: 
Thu Jul 25 08:56:09 2024 00:14:02.657 read: IOPS=76, BW=9818KiB/s (10.1MB/s)(80.0MiB/8344msec) 00:14:02.657 slat (usec): min=4, max=3094, avg=60.52, stdev=169.22 00:14:02.657 clat (usec): min=1536, max=173518, avg=20199.85, stdev=23200.00 00:14:02.657 lat (msec): min=4, max=173, avg=20.26, stdev=23.20 00:14:02.657 clat percentiles (msec): 00:14:02.657 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:14:02.657 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 15], 60.00th=[ 18], 00:14:02.657 | 70.00th=[ 21], 80.00th=[ 24], 90.00th=[ 32], 95.00th=[ 41], 00:14:02.657 | 99.00th=[ 144], 99.50th=[ 159], 99.90th=[ 174], 99.95th=[ 174], 00:14:02.657 | 99.99th=[ 174] 00:14:02.657 write: IOPS=86, BW=10.9MiB/s (11.4MB/s)(91.5MiB/8432msec); 0 zone resets 00:14:02.657 slat (usec): min=34, max=3371, avg=157.20, stdev=243.69 00:14:02.657 clat (msec): min=24, max=304, avg=91.08, stdev=36.18 00:14:02.657 lat (msec): min=24, max=304, avg=91.24, stdev=36.19 00:14:02.657 clat percentiles (msec): 00:14:02.657 | 1.00th=[ 44], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 69], 00:14:02.657 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 80], 60.00th=[ 87], 00:14:02.657 | 70.00th=[ 94], 80.00th=[ 108], 90.00th=[ 127], 95.00th=[ 165], 00:14:02.657 | 99.00th=[ 241], 99.50th=[ 271], 99.90th=[ 305], 99.95th=[ 305], 00:14:02.657 | 99.99th=[ 305] 00:14:02.657 bw ( KiB/s): min= 2043, max=15616, per=0.89%, avg=9753.42, stdev=4322.64, samples=19 00:14:02.657 iops : min= 15, max= 122, avg=75.95, stdev=34.07, samples=19 00:14:02.657 lat (msec) : 2=0.07%, 10=11.95%, 20=17.93%, 50=15.09%, 100=40.74% 00:14:02.657 lat (msec) : 250=13.70%, 500=0.51% 00:14:02.657 cpu : usr=0.68%, sys=0.28%, ctx=2326, majf=0, minf=5 00:14:02.657 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.657 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.658 issued rwts: total=640,732,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:14:02.658 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.658 job81: (groupid=0, jobs=1): err= 0: pid=70535: Thu Jul 25 08:56:09 2024 00:14:02.658 read: IOPS=81, BW=10.2MiB/s (10.7MB/s)(80.0MiB/7850msec) 00:14:02.658 slat (usec): min=4, max=1134, avg=54.92, stdev=114.72 00:14:02.658 clat (msec): min=5, max=324, avg=18.70, stdev=27.53 00:14:02.658 lat (msec): min=5, max=324, avg=18.75, stdev=27.53 00:14:02.658 clat percentiles (msec): 00:14:02.658 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:14:02.658 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 15], 00:14:02.658 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 28], 95.00th=[ 44], 00:14:02.658 | 99.00th=[ 215], 99.50th=[ 236], 99.90th=[ 326], 99.95th=[ 326], 00:14:02.658 | 99.99th=[ 326] 00:14:02.658 write: IOPS=75, BW=9708KiB/s (9941kB/s)(81.0MiB/8544msec); 0 zone resets 00:14:02.658 slat (usec): min=41, max=5221, avg=184.08, stdev=357.32 00:14:02.658 clat (msec): min=39, max=543, avg=104.20, stdev=60.22 00:14:02.658 lat (msec): min=39, max=544, avg=104.39, stdev=60.24 00:14:02.658 clat percentiles (msec): 00:14:02.658 | 1.00th=[ 51], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 69], 00:14:02.658 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 82], 60.00th=[ 91], 00:14:02.658 | 70.00th=[ 107], 80.00th=[ 131], 90.00th=[ 176], 95.00th=[ 209], 00:14:02.658 | 99.00th=[ 305], 99.50th=[ 527], 99.90th=[ 542], 99.95th=[ 542], 00:14:02.658 | 99.99th=[ 542] 00:14:02.658 bw ( KiB/s): min= 1795, max=14848, per=0.79%, avg=8620.21, stdev=3996.93, samples=19 00:14:02.658 iops : min= 14, max= 116, avg=67.16, stdev=31.39, samples=19 00:14:02.658 lat (msec) : 10=12.89%, 20=28.73%, 50=6.75%, 100=34.08%, 250=15.84% 00:14:02.658 lat (msec) : 500=1.40%, 750=0.31% 00:14:02.658 cpu : usr=0.65%, sys=0.35%, ctx=2050, majf=0, minf=7 00:14:02.658 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:14:02.658 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.658 issued rwts: total=640,648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.658 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.658 job82: (groupid=0, jobs=1): err= 0: pid=70536: Thu Jul 25 08:56:09 2024 00:14:02.658 read: IOPS=82, BW=10.4MiB/s (10.9MB/s)(80.0MiB/7712msec) 00:14:02.658 slat (usec): min=4, max=1739, avg=63.13, stdev=136.41 00:14:02.658 clat (usec): min=6796, max=48318, avg=14120.48, stdev=5348.06 00:14:02.658 lat (usec): min=6807, max=48323, avg=14183.61, stdev=5336.65 00:14:02.658 clat percentiles (usec): 00:14:02.658 | 1.00th=[ 7504], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10421], 00:14:02.658 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12518], 60.00th=[13304], 00:14:02.658 | 70.00th=[15270], 80.00th=[17695], 90.00th=[20317], 95.00th=[23987], 00:14:02.658 | 99.00th=[35390], 99.50th=[42730], 99.90th=[48497], 99.95th=[48497], 00:14:02.658 | 99.99th=[48497] 00:14:02.658 write: IOPS=76, BW=9805KiB/s (10.0MB/s)(85.4MiB/8916msec); 0 zone resets 00:14:02.658 slat (usec): min=41, max=6642, avg=217.27, stdev=517.38 00:14:02.658 clat (msec): min=23, max=445, avg=103.46, stdev=50.56 00:14:02.658 lat (msec): min=23, max=445, avg=103.68, stdev=50.58 00:14:02.658 clat percentiles (msec): 00:14:02.658 | 1.00th=[ 36], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 69], 00:14:02.658 | 30.00th=[ 74], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 97], 00:14:02.658 | 70.00th=[ 110], 80.00th=[ 129], 90.00th=[ 167], 95.00th=[ 192], 00:14:02.658 | 99.00th=[ 300], 99.50th=[ 342], 99.90th=[ 447], 99.95th=[ 447], 00:14:02.658 | 99.99th=[ 447] 00:14:02.658 bw ( KiB/s): min= 1788, max=14336, per=0.79%, avg=8651.10, stdev=3869.29, samples=20 00:14:02.658 iops : min= 13, max= 112, avg=67.35, stdev=30.48, samples=20 00:14:02.658 lat (msec) : 10=7.63%, 20=35.07%, 50=6.27%, 100=31.29%, 250=18.59% 00:14:02.658 lat (msec) : 500=1.13% 00:14:02.658 cpu : usr=0.69%, 
sys=0.32%, ctx=2270, majf=0, minf=5 00:14:02.658 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.658 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.658 issued rwts: total=640,683,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.658 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.658 job83: (groupid=0, jobs=1): err= 0: pid=70537: Thu Jul 25 08:56:09 2024 00:14:02.658 read: IOPS=85, BW=10.7MiB/s (11.2MB/s)(80.0MiB/7472msec) 00:14:02.658 slat (usec): min=4, max=1001, avg=56.09, stdev=104.61 00:14:02.658 clat (usec): min=5250, max=43004, avg=12581.00, stdev=5588.19 00:14:02.658 lat (usec): min=5271, max=43015, avg=12637.09, stdev=5590.04 00:14:02.658 clat percentiles (usec): 00:14:02.658 | 1.00th=[ 5604], 5.00th=[ 6456], 10.00th=[ 7242], 20.00th=[ 8586], 00:14:02.658 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[11076], 60.00th=[11863], 00:14:02.658 | 70.00th=[13698], 80.00th=[16319], 90.00th=[19006], 95.00th=[23462], 00:14:02.658 | 99.00th=[33817], 99.50th=[35390], 99.90th=[43254], 99.95th=[43254], 00:14:02.658 | 99.99th=[43254] 00:14:02.658 write: IOPS=76, BW=9855KiB/s (10.1MB/s)(87.0MiB/9040msec); 0 zone resets 00:14:02.658 slat (usec): min=39, max=3312, avg=161.46, stdev=250.47 00:14:02.658 clat (msec): min=55, max=351, avg=103.14, stdev=46.14 00:14:02.658 lat (msec): min=56, max=351, avg=103.30, stdev=46.15 00:14:02.658 clat percentiles (msec): 00:14:02.658 | 1.00th=[ 57], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 69], 00:14:02.658 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 86], 60.00th=[ 99], 00:14:02.658 | 70.00th=[ 115], 80.00th=[ 138], 90.00th=[ 163], 95.00th=[ 188], 00:14:02.658 | 99.00th=[ 292], 99.50th=[ 305], 99.90th=[ 351], 99.95th=[ 351], 00:14:02.658 | 99.99th=[ 351] 00:14:02.658 bw ( KiB/s): min= 3072, max=14848, per=0.81%, avg=8817.80, stdev=3661.53, samples=20 00:14:02.658 iops : min= 24, 
max= 116, avg=68.75, stdev=28.70, samples=20 00:14:02.658 lat (msec) : 10=18.94%, 20=25.07%, 50=3.89%, 100=31.81%, 250=19.54% 00:14:02.658 lat (msec) : 500=0.75% 00:14:02.658 cpu : usr=0.71%, sys=0.20%, ctx=2360, majf=0, minf=5 00:14:02.658 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.658 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.658 issued rwts: total=640,696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.658 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.658 job84: (groupid=0, jobs=1): err= 0: pid=70538: Thu Jul 25 08:56:09 2024 00:14:02.658 read: IOPS=63, BW=8109KiB/s (8303kB/s)(60.0MiB/7577msec) 00:14:02.658 slat (usec): min=6, max=1011, avg=66.37, stdev=114.23 00:14:02.658 clat (msec): min=4, max=328, avg=21.02, stdev=41.36 00:14:02.658 lat (msec): min=4, max=328, avg=21.08, stdev=41.35 00:14:02.658 clat percentiles (msec): 00:14:02.658 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:14:02.658 | 30.00th=[ 10], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 14], 00:14:02.658 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 23], 95.00th=[ 42], 00:14:02.658 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 330], 99.95th=[ 330], 00:14:02.658 | 99.99th=[ 330] 00:14:02.658 write: IOPS=70, BW=9079KiB/s (9297kB/s)(77.9MiB/8783msec); 0 zone resets 00:14:02.658 slat (usec): min=41, max=13957, avg=219.22, stdev=686.08 00:14:02.658 clat (msec): min=55, max=382, avg=111.91, stdev=52.71 00:14:02.658 lat (msec): min=55, max=382, avg=112.13, stdev=52.68 00:14:02.658 clat percentiles (msec): 00:14:02.658 | 1.00th=[ 57], 5.00th=[ 63], 10.00th=[ 66], 20.00th=[ 71], 00:14:02.658 | 30.00th=[ 77], 40.00th=[ 85], 50.00th=[ 94], 60.00th=[ 107], 00:14:02.658 | 70.00th=[ 127], 80.00th=[ 146], 90.00th=[ 180], 95.00th=[ 213], 00:14:02.658 | 99.00th=[ 309], 99.50th=[ 326], 99.90th=[ 384], 99.95th=[ 384], 00:14:02.658 | 
99.99th=[ 384] 00:14:02.658 bw ( KiB/s): min= 3840, max=15360, per=0.76%, avg=8296.47, stdev=3585.75, samples=19 00:14:02.658 iops : min= 30, max= 120, avg=64.63, stdev=28.08, samples=19 00:14:02.658 lat (msec) : 10=13.24%, 20=23.39%, 50=4.81%, 100=30.92%, 250=25.48% 00:14:02.658 lat (msec) : 500=2.18% 00:14:02.658 cpu : usr=0.71%, sys=0.22%, ctx=1892, majf=0, minf=7 00:14:02.658 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.658 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.658 issued rwts: total=480,623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.658 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.658 job85: (groupid=0, jobs=1): err= 0: pid=70539: Thu Jul 25 08:56:09 2024 00:14:02.658 read: IOPS=81, BW=10.2MiB/s (10.7MB/s)(80.0MiB/7861msec) 00:14:02.658 slat (usec): min=4, max=4011, avg=57.82, stdev=229.92 00:14:02.658 clat (usec): min=5501, max=44650, avg=14932.02, stdev=6494.84 00:14:02.658 lat (usec): min=5514, max=44659, avg=14989.84, stdev=6513.01 00:14:02.658 clat percentiles (usec): 00:14:02.658 | 1.00th=[ 6587], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9896], 00:14:02.658 | 30.00th=[11076], 40.00th=[12125], 50.00th=[13042], 60.00th=[14222], 00:14:02.658 | 70.00th=[16188], 80.00th=[18220], 90.00th=[23725], 95.00th=[27132], 00:14:02.658 | 99.00th=[40633], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:14:02.658 | 99.99th=[44827] 00:14:02.658 write: IOPS=79, BW=9.89MiB/s (10.4MB/s)(87.6MiB/8856msec); 0 zone resets 00:14:02.658 slat (usec): min=34, max=3755, avg=159.72, stdev=228.12 00:14:02.658 clat (msec): min=22, max=556, avg=100.04, stdev=54.67 00:14:02.658 lat (msec): min=22, max=556, avg=100.20, stdev=54.67 00:14:02.658 clat percentiles (msec): 00:14:02.658 | 1.00th=[ 57], 5.00th=[ 60], 10.00th=[ 64], 20.00th=[ 69], 00:14:02.658 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 
81], 60.00th=[ 88], 00:14:02.658 | 70.00th=[ 99], 80.00th=[ 128], 90.00th=[ 163], 95.00th=[ 197], 00:14:02.658 | 99.00th=[ 305], 99.50th=[ 456], 99.90th=[ 558], 99.95th=[ 558], 00:14:02.658 | 99.99th=[ 558] 00:14:02.658 bw ( KiB/s): min= 1788, max=16128, per=0.81%, avg=8868.85, stdev=4360.81, samples=20 00:14:02.658 iops : min= 13, max= 126, avg=69.05, stdev=34.33, samples=20 00:14:02.658 lat (msec) : 10=9.77%, 20=30.28%, 50=7.90%, 100=36.84%, 250=14.32% 00:14:02.658 lat (msec) : 500=0.67%, 750=0.22% 00:14:02.658 cpu : usr=0.65%, sys=0.25%, ctx=2171, majf=0, minf=9 00:14:02.658 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.658 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.659 issued rwts: total=640,701,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.659 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.659 job86: (groupid=0, jobs=1): err= 0: pid=70540: Thu Jul 25 08:56:09 2024 00:14:02.659 read: IOPS=82, BW=10.3MiB/s (10.8MB/s)(80.0MiB/7776msec) 00:14:02.659 slat (usec): min=4, max=795, avg=48.70, stdev=92.00 00:14:02.659 clat (usec): min=5271, max=58374, avg=13512.22, stdev=7956.22 00:14:02.659 lat (usec): min=5289, max=58383, avg=13560.92, stdev=7956.39 00:14:02.659 clat percentiles (usec): 00:14:02.659 | 1.00th=[ 5473], 5.00th=[ 6259], 10.00th=[ 6849], 20.00th=[ 8094], 00:14:02.659 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11469], 60.00th=[12780], 00:14:02.659 | 70.00th=[14091], 80.00th=[16057], 90.00th=[20055], 95.00th=[26346], 00:14:02.659 | 99.00th=[50070], 99.50th=[51643], 99.90th=[58459], 99.95th=[58459], 00:14:02.659 | 99.99th=[58459] 00:14:02.659 write: IOPS=80, BW=10.1MiB/s (10.6MB/s)(90.2MiB/8969msec); 0 zone resets 00:14:02.659 slat (usec): min=37, max=3834, avg=174.02, stdev=278.42 00:14:02.659 clat (msec): min=13, max=560, avg=98.23, stdev=54.73 00:14:02.659 lat (msec): min=13, 
max=561, avg=98.40, stdev=54.74 00:14:02.659 clat percentiles (msec): 00:14:02.659 | 1.00th=[ 23], 5.00th=[ 60], 10.00th=[ 63], 20.00th=[ 67], 00:14:02.659 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 88], 00:14:02.659 | 70.00th=[ 101], 80.00th=[ 122], 90.00th=[ 161], 95.00th=[ 188], 00:14:02.659 | 99.00th=[ 300], 99.50th=[ 464], 99.90th=[ 558], 99.95th=[ 558], 00:14:02.659 | 99.99th=[ 558] 00:14:02.659 bw ( KiB/s): min= 2043, max=14592, per=0.84%, avg=9149.00, stdev=4248.29, samples=20 00:14:02.659 iops : min= 15, max= 114, avg=71.25, stdev=33.46, samples=20 00:14:02.659 lat (msec) : 10=14.54%, 20=27.90%, 50=4.77%, 100=36.64%, 250=15.35% 00:14:02.659 lat (msec) : 500=0.66%, 750=0.15% 00:14:02.659 cpu : usr=0.59%, sys=0.35%, ctx=2330, majf=0, minf=1 00:14:02.659 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.659 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.659 issued rwts: total=640,722,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.659 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.659 job87: (groupid=0, jobs=1): err= 0: pid=70541: Thu Jul 25 08:56:09 2024 00:14:02.659 read: IOPS=73, BW=9457KiB/s (9684kB/s)(80.0MiB/8662msec) 00:14:02.659 slat (usec): min=5, max=2321, avg=67.69, stdev=158.31 00:14:02.659 clat (msec): min=4, max=380, avg=21.62, stdev=32.24 00:14:02.659 lat (msec): min=4, max=380, avg=21.69, stdev=32.25 00:14:02.659 clat percentiles (msec): 00:14:02.659 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:14:02.659 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 17], 00:14:02.659 | 70.00th=[ 19], 80.00th=[ 22], 90.00th=[ 31], 95.00th=[ 48], 00:14:02.659 | 99.00th=[ 188], 99.50th=[ 309], 99.90th=[ 380], 99.95th=[ 380], 00:14:02.659 | 99.99th=[ 380] 00:14:02.659 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(91.4MiB/8322msec); 0 zone resets 00:14:02.659 slat 
(usec): min=29, max=3976, avg=149.69, stdev=226.79 00:14:02.659 clat (msec): min=5, max=577, avg=90.17, stdev=54.31 00:14:02.659 lat (msec): min=5, max=577, avg=90.32, stdev=54.33 00:14:02.659 clat percentiles (msec): 00:14:02.659 | 1.00th=[ 8], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 68], 00:14:02.659 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 85], 00:14:02.659 | 70.00th=[ 91], 80.00th=[ 106], 90.00th=[ 123], 95.00th=[ 159], 00:14:02.659 | 99.00th=[ 313], 99.50th=[ 542], 99.90th=[ 575], 99.95th=[ 575], 00:14:02.659 | 99.99th=[ 575] 00:14:02.659 bw ( KiB/s): min= 255, max=17408, per=0.89%, avg=9741.00, stdev=4827.99, samples=19 00:14:02.659 iops : min= 1, max= 136, avg=75.84, stdev=37.77, samples=19 00:14:02.659 lat (msec) : 10=4.38%, 20=32.53%, 50=9.63%, 100=39.75%, 250=12.55% 00:14:02.659 lat (msec) : 500=0.80%, 750=0.36% 00:14:02.659 cpu : usr=0.62%, sys=0.29%, ctx=2376, majf=0, minf=1 00:14:02.659 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.659 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.659 issued rwts: total=640,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.659 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.659 job88: (groupid=0, jobs=1): err= 0: pid=70542: Thu Jul 25 08:56:09 2024 00:14:02.659 read: IOPS=66, BW=8462KiB/s (8665kB/s)(60.0MiB/7261msec) 00:14:02.659 slat (usec): min=4, max=1925, avg=54.02, stdev=131.91 00:14:02.659 clat (msec): min=4, max=221, avg=20.61, stdev=29.91 00:14:02.659 lat (msec): min=4, max=221, avg=20.66, stdev=29.91 00:14:02.659 clat percentiles (msec): 00:14:02.659 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 10], 00:14:02.659 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:14:02.659 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 26], 95.00th=[ 66], 00:14:02.659 | 99.00th=[ 213], 99.50th=[ 218], 99.90th=[ 222], 99.95th=[ 
222], 00:14:02.659 | 99.99th=[ 222] 00:14:02.659 write: IOPS=67, BW=8652KiB/s (8859kB/s)(74.4MiB/8803msec); 0 zone resets 00:14:02.659 slat (usec): min=38, max=4827, avg=182.71, stdev=290.35 00:14:02.659 clat (msec): min=57, max=513, avg=117.62, stdev=63.88 00:14:02.659 lat (msec): min=57, max=514, avg=117.80, stdev=63.87 00:14:02.659 clat percentiles (msec): 00:14:02.659 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 65], 20.00th=[ 74], 00:14:02.659 | 30.00th=[ 84], 40.00th=[ 90], 50.00th=[ 102], 60.00th=[ 111], 00:14:02.659 | 70.00th=[ 122], 80.00th=[ 144], 90.00th=[ 188], 95.00th=[ 253], 00:14:02.659 | 99.00th=[ 376], 99.50th=[ 451], 99.90th=[ 514], 99.95th=[ 514], 00:14:02.659 | 99.99th=[ 514] 00:14:02.659 bw ( KiB/s): min= 3328, max=13312, per=0.73%, avg=7920.42, stdev=3248.57, samples=19 00:14:02.659 iops : min= 26, max= 104, avg=61.74, stdev=25.40, samples=19 00:14:02.659 lat (msec) : 10=11.53%, 20=24.47%, 50=5.58%, 100=28.56%, 250=26.98% 00:14:02.659 lat (msec) : 500=2.79%, 750=0.09% 00:14:02.659 cpu : usr=0.61%, sys=0.14%, ctx=1906, majf=0, minf=3 00:14:02.659 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.659 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.659 issued rwts: total=480,595,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.659 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.659 job89: (groupid=0, jobs=1): err= 0: pid=70543: Thu Jul 25 08:56:09 2024 00:14:02.659 read: IOPS=79, BW=9.92MiB/s (10.4MB/s)(80.0MiB/8064msec) 00:14:02.659 slat (usec): min=5, max=1842, avg=48.35, stdev=127.21 00:14:02.659 clat (usec): min=4920, max=63039, avg=15665.20, stdev=9730.38 00:14:02.659 lat (usec): min=5056, max=63069, avg=15713.54, stdev=9734.58 00:14:02.659 clat percentiles (usec): 00:14:02.659 | 1.00th=[ 5407], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 8225], 00:14:02.659 | 30.00th=[10159], 
40.00th=[11469], 50.00th=[12780], 60.00th=[14222], 00:14:02.659 | 70.00th=[17957], 80.00th=[20841], 90.00th=[27395], 95.00th=[35914], 00:14:02.659 | 99.00th=[51119], 99.50th=[54264], 99.90th=[63177], 99.95th=[63177], 00:14:02.659 | 99.99th=[63177] 00:14:02.659 write: IOPS=81, BW=10.2MiB/s (10.7MB/s)(89.4MiB/8788msec); 0 zone resets 00:14:02.659 slat (usec): min=34, max=4939, avg=177.85, stdev=297.52 00:14:02.659 clat (msec): min=56, max=431, avg=97.34, stdev=46.43 00:14:02.659 lat (msec): min=57, max=431, avg=97.51, stdev=46.45 00:14:02.659 clat percentiles (msec): 00:14:02.659 | 1.00th=[ 59], 5.00th=[ 63], 10.00th=[ 65], 20.00th=[ 69], 00:14:02.659 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 89], 00:14:02.659 | 70.00th=[ 99], 80.00th=[ 117], 90.00th=[ 150], 95.00th=[ 184], 00:14:02.659 | 99.00th=[ 279], 99.50th=[ 397], 99.90th=[ 430], 99.95th=[ 430], 00:14:02.659 | 99.99th=[ 430] 00:14:02.659 bw ( KiB/s): min= 3065, max=15104, per=0.87%, avg=9538.26, stdev=3843.11, samples=19 00:14:02.659 iops : min= 23, max= 118, avg=74.37, stdev=30.22, samples=19 00:14:02.659 lat (msec) : 10=13.87%, 20=22.36%, 50=10.41%, 100=38.15%, 250=14.46% 00:14:02.659 lat (msec) : 500=0.74% 00:14:02.659 cpu : usr=0.65%, sys=0.23%, ctx=2287, majf=0, minf=5 00:14:02.659 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.659 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.659 issued rwts: total=640,715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.659 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.659 job90: (groupid=0, jobs=1): err= 0: pid=70544: Thu Jul 25 08:56:09 2024 00:14:02.659 read: IOPS=78, BW=9.81MiB/s (10.3MB/s)(80.0MiB/8154msec) 00:14:02.659 slat (usec): min=4, max=3868, avg=86.01, stdev=275.33 00:14:02.659 clat (msec): min=5, max=135, avg=19.24, stdev=14.24 00:14:02.659 lat (msec): min=6, max=135, 
avg=19.33, stdev=14.26 00:14:02.659 clat percentiles (msec): 00:14:02.659 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:14:02.659 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 20], 00:14:02.659 | 70.00th=[ 22], 80.00th=[ 26], 90.00th=[ 31], 95.00th=[ 40], 00:14:02.659 | 99.00th=[ 109], 99.50th=[ 123], 99.90th=[ 136], 99.95th=[ 136], 00:14:02.659 | 99.99th=[ 136] 00:14:02.659 write: IOPS=77, BW=9953KiB/s (10.2MB/s)(82.6MiB/8501msec); 0 zone resets 00:14:02.659 slat (usec): min=32, max=2057, avg=185.59, stdev=233.54 00:14:02.659 clat (msec): min=56, max=478, avg=101.80, stdev=54.18 00:14:02.659 lat (msec): min=57, max=478, avg=101.99, stdev=54.16 00:14:02.659 clat percentiles (msec): 00:14:02.659 | 1.00th=[ 59], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 69], 00:14:02.659 | 30.00th=[ 74], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 96], 00:14:02.659 | 70.00th=[ 105], 80.00th=[ 114], 90.00th=[ 153], 95.00th=[ 209], 00:14:02.659 | 99.00th=[ 351], 99.50th=[ 393], 99.90th=[ 477], 99.95th=[ 477], 00:14:02.659 | 99.99th=[ 477] 00:14:02.659 bw ( KiB/s): min= 1792, max=15104, per=0.85%, avg=9299.39, stdev=3857.74, samples=18 00:14:02.659 iops : min= 14, max= 118, avg=72.50, stdev=30.22, samples=18 00:14:02.659 lat (msec) : 10=9.15%, 20=21.45%, 50=17.52%, 100=33.51%, 250=16.76% 00:14:02.659 lat (msec) : 500=1.61% 00:14:02.659 cpu : usr=0.60%, sys=0.36%, ctx=2311, majf=0, minf=11 00:14:02.659 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.660 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.660 issued rwts: total=640,661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.660 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.660 job91: (groupid=0, jobs=1): err= 0: pid=70545: Thu Jul 25 08:56:09 2024 00:14:02.660 read: IOPS=76, BW=9854KiB/s (10.1MB/s)(80.0MiB/8313msec) 00:14:02.660 slat (usec): min=5, 
max=7491, avg=78.74, stdev=346.76 00:14:02.660 clat (usec): min=6270, max=70311, avg=17716.50, stdev=9422.59 00:14:02.660 lat (usec): min=6292, max=70334, avg=17795.24, stdev=9464.84 00:14:02.660 clat percentiles (usec): 00:14:02.660 | 1.00th=[ 7767], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9765], 00:14:02.660 | 30.00th=[10814], 40.00th=[12911], 50.00th=[15139], 60.00th=[17957], 00:14:02.660 | 70.00th=[20579], 80.00th=[25560], 90.00th=[28705], 95.00th=[33424], 00:14:02.660 | 99.00th=[49021], 99.50th=[55313], 99.90th=[70779], 99.95th=[70779], 00:14:02.660 | 99.99th=[70779] 00:14:02.660 write: IOPS=81, BW=10.2MiB/s (10.7MB/s)(87.8MiB/8627msec); 0 zone resets 00:14:02.660 slat (usec): min=36, max=4450, avg=171.86, stdev=266.60 00:14:02.660 clat (msec): min=21, max=343, avg=97.40, stdev=38.37 00:14:02.660 lat (msec): min=21, max=343, avg=97.57, stdev=38.39 00:14:02.660 clat percentiles (msec): 00:14:02.660 | 1.00th=[ 37], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 71], 00:14:02.660 | 30.00th=[ 77], 40.00th=[ 80], 50.00th=[ 86], 60.00th=[ 93], 00:14:02.660 | 70.00th=[ 103], 80.00th=[ 121], 90.00th=[ 153], 95.00th=[ 180], 00:14:02.660 | 99.00th=[ 226], 99.50th=[ 241], 99.90th=[ 342], 99.95th=[ 342], 00:14:02.660 | 99.99th=[ 342] 00:14:02.660 bw ( KiB/s): min= 1536, max=16128, per=0.81%, avg=8880.40, stdev=4322.09, samples=20 00:14:02.660 iops : min= 12, max= 126, avg=69.30, stdev=33.67, samples=20 00:14:02.660 lat (msec) : 10=11.10%, 20=21.39%, 50=15.35%, 100=35.62%, 250=16.32% 00:14:02.660 lat (msec) : 500=0.22% 00:14:02.660 cpu : usr=0.67%, sys=0.31%, ctx=2271, majf=0, minf=1 00:14:02.660 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.660 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.660 issued rwts: total=640,702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.660 latency : target=0, window=0, percentile=100.00%, depth=8 
00:14:02.660 job92: (groupid=0, jobs=1): err= 0: pid=70546: Thu Jul 25 08:56:09 2024 00:14:02.660 read: IOPS=76, BW=9843KiB/s (10.1MB/s)(80.0MiB/8323msec) 00:14:02.660 slat (usec): min=4, max=816, avg=61.64, stdev=119.20 00:14:02.660 clat (usec): min=7318, max=70741, avg=16827.83, stdev=9196.74 00:14:02.660 lat (usec): min=7329, max=70752, avg=16889.47, stdev=9207.72 00:14:02.660 clat percentiles (usec): 00:14:02.660 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10159], 00:14:02.660 | 30.00th=[11863], 40.00th=[13042], 50.00th=[14091], 60.00th=[15401], 00:14:02.660 | 70.00th=[17695], 80.00th=[20579], 90.00th=[27657], 95.00th=[34866], 00:14:02.660 | 99.00th=[56886], 99.50th=[63701], 99.90th=[70779], 99.95th=[70779], 00:14:02.660 | 99.99th=[70779] 00:14:02.660 write: IOPS=80, BW=10.0MiB/s (10.5MB/s)(87.1MiB/8677msec); 0 zone resets 00:14:02.660 slat (usec): min=33, max=4417, avg=201.38, stdev=389.00 00:14:02.660 clat (msec): min=23, max=302, avg=98.62, stdev=39.99 00:14:02.660 lat (msec): min=23, max=302, avg=98.82, stdev=39.99 00:14:02.660 clat percentiles (msec): 00:14:02.660 | 1.00th=[ 36], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 71], 00:14:02.660 | 30.00th=[ 75], 40.00th=[ 81], 50.00th=[ 88], 60.00th=[ 95], 00:14:02.660 | 70.00th=[ 106], 80.00th=[ 116], 90.00th=[ 148], 95.00th=[ 186], 00:14:02.660 | 99.00th=[ 262], 99.50th=[ 271], 99.90th=[ 305], 99.95th=[ 305], 00:14:02.660 | 99.99th=[ 305] 00:14:02.660 bw ( KiB/s): min= 1792, max=14080, per=0.81%, avg=8829.20, stdev=4359.10, samples=20 00:14:02.660 iops : min= 14, max= 110, avg=68.90, stdev=33.97, samples=20 00:14:02.660 lat (msec) : 10=8.75%, 20=28.27%, 50=10.62%, 100=34.70%, 250=16.90% 00:14:02.660 lat (msec) : 500=0.75% 00:14:02.660 cpu : usr=0.58%, sys=0.35%, ctx=2323, majf=0, minf=7 00:14:02.660 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.660 complete : 0=0.0%, 4=99.3%, 
8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.660 issued rwts: total=640,697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.660 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.660 job93: (groupid=0, jobs=1): err= 0: pid=70547: Thu Jul 25 08:56:09 2024 00:14:02.660 read: IOPS=68, BW=8786KiB/s (8997kB/s)(60.0MiB/6993msec) 00:14:02.660 slat (usec): min=5, max=3536, avg=70.70, stdev=195.18 00:14:02.660 clat (msec): min=3, max=303, avg=24.10, stdev=51.37 00:14:02.660 lat (msec): min=3, max=303, avg=24.17, stdev=51.39 00:14:02.660 clat percentiles (msec): 00:14:02.660 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8], 00:14:02.660 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12], 00:14:02.660 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 27], 95.00th=[ 126], 00:14:02.660 | 99.00th=[ 288], 99.50th=[ 292], 99.90th=[ 305], 99.95th=[ 305], 00:14:02.660 | 99.99th=[ 305] 00:14:02.660 write: IOPS=66, BW=8542KiB/s (8747kB/s)(71.8MiB/8601msec); 0 zone resets 00:14:02.660 slat (usec): min=43, max=4975, avg=180.49, stdev=297.88 00:14:02.660 clat (msec): min=55, max=408, avg=119.22, stdev=47.33 00:14:02.660 lat (msec): min=56, max=408, avg=119.40, stdev=47.31 00:14:02.660 clat percentiles (msec): 00:14:02.660 | 1.00th=[ 58], 5.00th=[ 66], 10.00th=[ 70], 20.00th=[ 77], 00:14:02.660 | 30.00th=[ 89], 40.00th=[ 102], 50.00th=[ 112], 60.00th=[ 123], 00:14:02.660 | 70.00th=[ 138], 80.00th=[ 153], 90.00th=[ 182], 95.00th=[ 203], 00:14:02.660 | 99.00th=[ 262], 99.50th=[ 342], 99.90th=[ 409], 99.95th=[ 409], 00:14:02.660 | 99.99th=[ 409] 00:14:02.660 bw ( KiB/s): min= 1280, max=14080, per=0.66%, avg=7243.45, stdev=3766.00, samples=20 00:14:02.660 iops : min= 10, max= 110, avg=56.45, stdev=29.45, samples=20 00:14:02.660 lat (msec) : 4=0.09%, 10=22.11%, 20=16.70%, 50=3.51%, 100=21.92% 00:14:02.660 lat (msec) : 250=33.68%, 500=1.99% 00:14:02.660 cpu : usr=0.46%, sys=0.24%, ctx=1871, majf=0, minf=9 00:14:02.660 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.660 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.660 issued rwts: total=480,574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.660 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.660 job94: (groupid=0, jobs=1): err= 0: pid=70548: Thu Jul 25 08:56:09 2024 00:14:02.660 read: IOPS=61, BW=7929KiB/s (8119kB/s)(60.0MiB/7749msec) 00:14:02.660 slat (usec): min=4, max=1122, avg=58.32, stdev=98.10 00:14:02.660 clat (msec): min=4, max=228, avg=24.09, stdev=34.21 00:14:02.660 lat (msec): min=4, max=228, avg=24.15, stdev=34.21 00:14:02.660 clat percentiles (msec): 00:14:02.660 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:14:02.660 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 17], 00:14:02.660 | 70.00th=[ 20], 80.00th=[ 24], 90.00th=[ 40], 95.00th=[ 68], 00:14:02.660 | 99.00th=[ 222], 99.50th=[ 224], 99.90th=[ 230], 99.95th=[ 230], 00:14:02.660 | 99.99th=[ 230] 00:14:02.660 write: IOPS=67, BW=8696KiB/s (8905kB/s)(73.0MiB/8596msec); 0 zone resets 00:14:02.660 slat (usec): min=35, max=6574, avg=208.56, stdev=353.64 00:14:02.660 clat (msec): min=58, max=360, avg=116.72, stdev=58.14 00:14:02.660 lat (msec): min=58, max=360, avg=116.92, stdev=58.13 00:14:02.660 clat percentiles (msec): 00:14:02.660 | 1.00th=[ 60], 5.00th=[ 64], 10.00th=[ 68], 20.00th=[ 74], 00:14:02.660 | 30.00th=[ 81], 40.00th=[ 93], 50.00th=[ 104], 60.00th=[ 113], 00:14:02.660 | 70.00th=[ 123], 80.00th=[ 138], 90.00th=[ 188], 95.00th=[ 257], 00:14:02.660 | 99.00th=[ 342], 99.50th=[ 355], 99.90th=[ 359], 99.95th=[ 359], 00:14:02.660 | 99.99th=[ 359] 00:14:02.660 bw ( KiB/s): min= 256, max=14080, per=0.68%, avg=7382.05, stdev=4053.31, samples=20 00:14:02.660 iops : min= 2, max= 110, avg=57.50, stdev=31.69, samples=20 00:14:02.660 lat (msec) : 10=10.81%, 20=21.80%, 50=9.02%, 100=27.63%, 250=27.91% 00:14:02.660 lat (msec) 
: 500=2.82% 00:14:02.660 cpu : usr=0.59%, sys=0.23%, ctx=1912, majf=0, minf=3 00:14:02.660 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.660 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.660 issued rwts: total=480,584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.660 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.660 job95: (groupid=0, jobs=1): err= 0: pid=70549: Thu Jul 25 08:56:09 2024 00:14:02.661 read: IOPS=71, BW=9099KiB/s (9318kB/s)(65.2MiB/7343msec) 00:14:02.661 slat (usec): min=4, max=1059, avg=48.49, stdev=97.91 00:14:02.661 clat (usec): min=3970, max=76436, avg=15335.77, stdev=9910.08 00:14:02.661 lat (usec): min=3998, max=76442, avg=15384.26, stdev=9920.47 00:14:02.661 clat percentiles (usec): 00:14:02.661 | 1.00th=[ 5473], 5.00th=[ 6194], 10.00th=[ 6718], 20.00th=[ 8717], 00:14:02.661 | 30.00th=[10290], 40.00th=[11600], 50.00th=[12649], 60.00th=[13698], 00:14:02.661 | 70.00th=[15401], 80.00th=[20841], 90.00th=[26608], 95.00th=[32113], 00:14:02.661 | 99.00th=[61604], 99.50th=[68682], 99.90th=[76022], 99.95th=[76022], 00:14:02.661 | 99.99th=[76022] 00:14:02.661 write: IOPS=71, BW=9108KiB/s (9327kB/s)(80.0MiB/8994msec); 0 zone resets 00:14:02.661 slat (usec): min=41, max=3989, avg=201.82, stdev=311.41 00:14:02.661 clat (msec): min=57, max=324, avg=111.46, stdev=41.33 00:14:02.661 lat (msec): min=59, max=324, avg=111.66, stdev=41.34 00:14:02.661 clat percentiles (msec): 00:14:02.661 | 1.00th=[ 59], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 75], 00:14:02.661 | 30.00th=[ 87], 40.00th=[ 97], 50.00th=[ 107], 60.00th=[ 113], 00:14:02.661 | 70.00th=[ 124], 80.00th=[ 140], 90.00th=[ 163], 95.00th=[ 188], 00:14:02.661 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 326], 99.95th=[ 326], 00:14:02.661 | 99.99th=[ 326] 00:14:02.661 bw ( KiB/s): min= 2304, max=13056, per=0.75%, avg=8177.42, stdev=3213.53, 
samples=19 00:14:02.661 iops : min= 18, max= 102, avg=63.74, stdev=25.11, samples=19 00:14:02.661 lat (msec) : 4=0.09%, 10=12.31%, 20=22.98%, 50=8.95%, 100=24.70% 00:14:02.661 lat (msec) : 250=30.46%, 500=0.52% 00:14:02.661 cpu : usr=0.53%, sys=0.22%, ctx=2160, majf=0, minf=3 00:14:02.661 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.661 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.661 issued rwts: total=522,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.661 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.661 job96: (groupid=0, jobs=1): err= 0: pid=70550: Thu Jul 25 08:56:09 2024 00:14:02.661 read: IOPS=77, BW=9922KiB/s (10.2MB/s)(80.0MiB/8256msec) 00:14:02.661 slat (usec): min=5, max=4897, avg=88.18, stdev=339.81 00:14:02.661 clat (usec): min=6827, max=50079, avg=17226.70, stdev=8795.03 00:14:02.661 lat (usec): min=6961, max=50086, avg=17314.88, stdev=8793.69 00:14:02.661 clat percentiles (usec): 00:14:02.661 | 1.00th=[ 7046], 5.00th=[ 7635], 10.00th=[ 8160], 20.00th=[ 9241], 00:14:02.661 | 30.00th=[11207], 40.00th=[13304], 50.00th=[15270], 60.00th=[17433], 00:14:02.661 | 70.00th=[20055], 80.00th=[22676], 90.00th=[28967], 95.00th=[36439], 00:14:02.661 | 99.00th=[45351], 99.50th=[46924], 99.90th=[50070], 99.95th=[50070], 00:14:02.661 | 99.99th=[50070] 00:14:02.661 write: IOPS=81, BW=10.2MiB/s (10.7MB/s)(88.1MiB/8657msec); 0 zone resets 00:14:02.661 slat (usec): min=38, max=9647, avg=210.25, stdev=554.98 00:14:02.661 clat (msec): min=41, max=306, avg=97.19, stdev=40.03 00:14:02.661 lat (msec): min=41, max=306, avg=97.40, stdev=40.06 00:14:02.661 clat percentiles (msec): 00:14:02.661 | 1.00th=[ 57], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 67], 00:14:02.661 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 84], 60.00th=[ 93], 00:14:02.661 | 70.00th=[ 104], 80.00th=[ 122], 90.00th=[ 150], 95.00th=[ 176], 
00:14:02.661 | 99.00th=[ 253], 99.50th=[ 275], 99.90th=[ 309], 99.95th=[ 309], 00:14:02.661 | 99.99th=[ 309] 00:14:02.661 bw ( KiB/s): min= 3072, max=15872, per=0.86%, avg=9403.21, stdev=3922.09, samples=19 00:14:02.661 iops : min= 24, max= 124, avg=73.26, stdev=30.86, samples=19 00:14:02.661 lat (msec) : 10=11.97%, 20=21.49%, 50=14.42%, 100=35.17%, 250=16.36% 00:14:02.661 lat (msec) : 500=0.59% 00:14:02.661 cpu : usr=0.66%, sys=0.26%, ctx=2403, majf=0, minf=3 00:14:02.661 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.661 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.661 issued rwts: total=640,705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.661 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.661 job97: (groupid=0, jobs=1): err= 0: pid=70551: Thu Jul 25 08:56:09 2024 00:14:02.661 read: IOPS=79, BW=9.91MiB/s (10.4MB/s)(80.0MiB/8075msec) 00:14:02.661 slat (usec): min=4, max=2351, avg=74.22, stdev=166.88 00:14:02.661 clat (msec): min=5, max=226, avg=19.80, stdev=20.74 00:14:02.661 lat (msec): min=5, max=226, avg=19.87, stdev=20.75 00:14:02.661 clat percentiles (msec): 00:14:02.661 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 13], 00:14:02.661 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 18], 00:14:02.661 | 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 26], 95.00th=[ 36], 00:14:02.661 | 99.00th=[ 150], 99.50th=[ 201], 99.90th=[ 226], 99.95th=[ 226], 00:14:02.661 | 99.99th=[ 226] 00:14:02.661 write: IOPS=76, BW=9806KiB/s (10.0MB/s)(81.4MiB/8498msec); 0 zone resets 00:14:02.661 slat (usec): min=41, max=6433, avg=265.70, stdev=564.48 00:14:02.661 clat (msec): min=12, max=351, avg=103.17, stdev=47.51 00:14:02.661 lat (msec): min=12, max=351, avg=103.43, stdev=47.51 00:14:02.661 clat percentiles (msec): 00:14:02.661 | 1.00th=[ 18], 5.00th=[ 60], 10.00th=[ 63], 20.00th=[ 69], 00:14:02.661 | 
30.00th=[ 77], 40.00th=[ 85], 50.00th=[ 93], 60.00th=[ 104], 00:14:02.661 | 70.00th=[ 114], 80.00th=[ 129], 90.00th=[ 148], 95.00th=[ 209], 00:14:02.661 | 99.00th=[ 288], 99.50th=[ 309], 99.90th=[ 351], 99.95th=[ 351], 00:14:02.661 | 99.99th=[ 351] 00:14:02.661 bw ( KiB/s): min= 1788, max=15872, per=0.75%, avg=8210.10, stdev=4437.20, samples=20 00:14:02.661 iops : min= 13, max= 124, avg=63.90, stdev=34.66, samples=20 00:14:02.661 lat (msec) : 10=5.11%, 20=31.53%, 50=13.32%, 100=26.65%, 250=22.23% 00:14:02.661 lat (msec) : 500=1.16% 00:14:02.661 cpu : usr=0.72%, sys=0.30%, ctx=2188, majf=0, minf=7 00:14:02.661 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.661 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.661 issued rwts: total=640,651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.661 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.661 job98: (groupid=0, jobs=1): err= 0: pid=70552: Thu Jul 25 08:56:09 2024 00:14:02.661 read: IOPS=75, BW=9603KiB/s (9833kB/s)(80.0MiB/8531msec) 00:14:02.661 slat (usec): min=5, max=2892, avg=65.41, stdev=230.05 00:14:02.661 clat (usec): min=3366, max=67670, avg=13837.17, stdev=8941.43 00:14:02.661 lat (usec): min=5398, max=67685, avg=13902.58, stdev=8931.42 00:14:02.661 clat percentiles (usec): 00:14:02.661 | 1.00th=[ 6128], 5.00th=[ 6849], 10.00th=[ 7242], 20.00th=[ 7963], 00:14:02.661 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[11207], 60.00th=[12518], 00:14:02.661 | 70.00th=[13698], 80.00th=[17957], 90.00th=[24249], 95.00th=[31851], 00:14:02.661 | 99.00th=[53740], 99.50th=[58459], 99.90th=[67634], 99.95th=[67634], 00:14:02.661 | 99.99th=[67634] 00:14:02.661 write: IOPS=80, BW=10.1MiB/s (10.6MB/s)(90.2MiB/8958msec); 0 zone resets 00:14:02.661 slat (usec): min=34, max=6036, avg=199.49, stdev=352.13 00:14:02.661 clat (usec): min=1957, max=282536, avg=98370.58, 
stdev=42381.55 00:14:02.661 lat (msec): min=2, max=282, avg=98.57, stdev=42.40 00:14:02.661 clat percentiles (msec): 00:14:02.661 | 1.00th=[ 7], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 68], 00:14:02.661 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 86], 60.00th=[ 100], 00:14:02.661 | 70.00th=[ 117], 80.00th=[ 130], 90.00th=[ 157], 95.00th=[ 178], 00:14:02.661 | 99.00th=[ 228], 99.50th=[ 251], 99.90th=[ 284], 99.95th=[ 284], 00:14:02.661 | 99.99th=[ 284] 00:14:02.661 bw ( KiB/s): min= 1795, max=17152, per=0.84%, avg=9148.70, stdev=4358.68, samples=20 00:14:02.661 iops : min= 14, max= 134, avg=71.40, stdev=34.09, samples=20 00:14:02.661 lat (msec) : 2=0.07%, 4=0.22%, 10=21.15%, 20=20.34%, 50=6.75% 00:14:02.661 lat (msec) : 100=30.32%, 250=20.85%, 500=0.29% 00:14:02.661 cpu : usr=0.66%, sys=0.23%, ctx=2370, majf=0, minf=9 00:14:02.661 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.661 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.661 issued rwts: total=640,722,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.661 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.661 job99: (groupid=0, jobs=1): err= 0: pid=70553: Thu Jul 25 08:56:09 2024 00:14:02.661 read: IOPS=78, BW=9.85MiB/s (10.3MB/s)(80.0MiB/8123msec) 00:14:02.661 slat (usec): min=5, max=4872, avg=88.06, stdev=276.37 00:14:02.661 clat (usec): min=7799, max=61815, avg=18789.54, stdev=8701.69 00:14:02.661 lat (usec): min=7891, max=61823, avg=18877.61, stdev=8694.14 00:14:02.661 clat percentiles (usec): 00:14:02.661 | 1.00th=[ 8029], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[11469], 00:14:02.661 | 30.00th=[13435], 40.00th=[14615], 50.00th=[16712], 60.00th=[19530], 00:14:02.661 | 70.00th=[21890], 80.00th=[24773], 90.00th=[28705], 95.00th=[34341], 00:14:02.661 | 99.00th=[49021], 99.50th=[49546], 99.90th=[61604], 99.95th=[61604], 00:14:02.661 | 99.99th=[61604] 
00:14:02.661 write: IOPS=79, BW=9.92MiB/s (10.4MB/s)(84.8MiB/8540msec); 0 zone resets 00:14:02.661 slat (usec): min=44, max=3615, avg=170.64, stdev=241.37 00:14:02.661 clat (msec): min=44, max=479, avg=99.68, stdev=57.24 00:14:02.661 lat (msec): min=44, max=479, avg=99.85, stdev=57.24 00:14:02.661 clat percentiles (msec): 00:14:02.661 | 1.00th=[ 58], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 68], 00:14:02.661 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 90], 00:14:02.661 | 70.00th=[ 101], 80.00th=[ 112], 90.00th=[ 142], 95.00th=[ 234], 00:14:02.661 | 99.00th=[ 401], 99.50th=[ 409], 99.90th=[ 481], 99.95th=[ 481], 00:14:02.661 | 99.99th=[ 481] 00:14:02.661 bw ( KiB/s): min= 1788, max=15616, per=0.87%, avg=9542.00, stdev=4212.66, samples=18 00:14:02.661 iops : min= 13, max= 122, avg=74.39, stdev=33.07, samples=18 00:14:02.661 lat (msec) : 10=5.84%, 20=24.66%, 50=17.98%, 100=36.12%, 250=13.35% 00:14:02.661 lat (msec) : 500=2.05% 00:14:02.661 cpu : usr=0.64%, sys=0.29%, ctx=2384, majf=0, minf=1 00:14:02.662 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.662 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.662 issued rwts: total=640,678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.662 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:02.662 00:14:02.662 Run status group 0 (all jobs): 00:14:02.662 READ: bw=998MiB/s (1046MB/s), 7929KiB/s-14.7MiB/s (8119kB/s-15.4MB/s), io=8981MiB (9417MB), run=6993-9003msec 00:14:02.662 WRITE: bw=1066MiB/s (1118MB/s), 7970KiB/s-15.3MiB/s (8162kB/s-16.1MB/s), io=9825MiB (10.3GB), run=7975-9218msec 00:14:02.662 00:14:02.662 Disk stats (read/write): 00:14:02.662 sdc: ios=675/714, merge=0/0, ticks=11235/66004, in_queue=77239, util=79.80% 00:14:02.662 sdf: ios=675/709, merge=0/0, ticks=11198/65662, in_queue=76861, util=80.02% 00:14:02.662 sdh: ios=521/640, merge=0/0, 
ticks=10196/64713, in_queue=74910, util=80.02% 00:14:02.662 sdj: ios=675/652, merge=0/0, ticks=13531/63469, in_queue=77001, util=80.62% 00:14:02.662 sdk: ios=676/738, merge=0/0, ticks=8772/69069, in_queue=77842, util=81.08% 00:14:02.662 sdo: ios=676/732, merge=0/0, ticks=10326/67058, in_queue=77384, util=80.69% 00:14:02.662 sdr: ios=675/701, merge=0/0, ticks=10503/66594, in_queue=77098, util=81.23% 00:14:02.662 sdu: ios=480/603, merge=0/0, ticks=11199/67052, in_queue=78251, util=81.07% 00:14:02.662 sdw: ios=660/749, merge=0/0, ticks=8742/69392, in_queue=78135, util=81.56% 00:14:02.662 sdaa: ios=480/597, merge=0/0, ticks=12243/65731, in_queue=77975, util=81.27% 00:14:02.662 sdg: ios=801/843, merge=0/0, ticks=9898/67962, in_queue=77860, util=81.53% 00:14:02.662 sdl: ios=1000/977, merge=0/0, ticks=15000/61165, in_queue=76166, util=81.52% 00:14:02.662 sdn: ios=997/991, merge=0/0, ticks=11023/67011, in_queue=78034, util=82.51% 00:14:02.662 sdq: ios=805/960, merge=0/0, ticks=10682/65858, in_queue=76541, util=82.17% 00:14:02.662 sdt: ios=802/888, merge=0/0, ticks=10634/65814, in_queue=76449, util=82.87% 00:14:02.662 sdv: ios=998/991, merge=0/0, ticks=11206/66028, in_queue=77235, util=83.10% 00:14:02.662 sdy: ios=907/960, merge=0/0, ticks=13704/63201, in_queue=76906, util=82.86% 00:14:02.662 sdz: ios=999/984, merge=0/0, ticks=11758/66295, in_queue=78053, util=83.75% 00:14:02.662 sdac: ios=803/927, merge=0/0, ticks=11205/65645, in_queue=76851, util=83.48% 00:14:02.662 sdae: ios=996/960, merge=0/0, ticks=15267/61793, in_queue=77060, util=83.39% 00:14:02.662 sdad: ios=821/960, merge=0/0, ticks=10892/64737, in_queue=75630, util=83.43% 00:14:02.662 sdaf: ios=962/964, merge=0/0, ticks=12947/63657, in_queue=76605, util=84.16% 00:14:02.662 sdah: ios=801/846, merge=0/0, ticks=11003/64908, in_queue=75911, util=84.39% 00:14:02.662 sdaj: ios=997/996, merge=0/0, ticks=10368/67934, in_queue=78302, util=84.87% 00:14:02.662 sdal: ios=801/881, merge=0/0, ticks=11177/65644, in_queue=76821, 
util=84.84% 00:14:02.662 sdan: ios=995/1009, merge=0/0, ticks=11435/66450, in_queue=77886, util=85.15% 00:14:02.662 sdap: ios=812/960, merge=0/0, ticks=8656/67938, in_queue=76595, util=84.74% 00:14:02.662 sdar: ios=962/1000, merge=0/0, ticks=10618/66457, in_queue=77076, util=85.39% 00:14:02.662 sdat: ios=802/952, merge=0/0, ticks=10324/64968, in_queue=75292, util=85.44% 00:14:02.662 sdaw: ios=802/883, merge=0/0, ticks=15472/61967, in_queue=77439, util=85.93% 00:14:02.662 sdag: ios=641/701, merge=0/0, ticks=10706/66484, in_queue=77191, util=85.99% 00:14:02.662 sdai: ios=480/592, merge=0/0, ticks=9794/67762, in_queue=77557, util=86.05% 00:14:02.662 sdak: ios=641/684, merge=0/0, ticks=12964/64547, in_queue=77512, util=86.25% 00:14:02.662 sdam: ios=680/714, merge=0/0, ticks=8705/69904, in_queue=78610, util=86.13% 00:14:02.662 sdao: ios=640/643, merge=0/0, ticks=10859/64983, in_queue=75842, util=86.46% 00:14:02.662 sdaq: ios=641/654, merge=0/0, ticks=16244/60581, in_queue=76825, util=86.42% 00:14:02.662 sdas: ios=641/654, merge=0/0, ticks=11887/64149, in_queue=76036, util=86.68% 00:14:02.662 sdau: ios=642/713, merge=0/0, ticks=7283/70830, in_queue=78113, util=86.63% 00:14:02.662 sdav: ios=480/582, merge=0/0, ticks=11211/66915, in_queue=78126, util=86.61% 00:14:02.662 sdax: ios=504/640, merge=0/0, ticks=9948/66330, in_queue=76278, util=86.39% 00:14:02.662 sday: ios=480/598, merge=0/0, ticks=7562/70784, in_queue=78346, util=86.99% 00:14:02.662 sdaz: ios=480/595, merge=0/0, ticks=7991/70234, in_queue=78226, util=87.12% 00:14:02.662 sdba: ios=642/753, merge=0/0, ticks=8307/69650, in_queue=77958, util=87.52% 00:14:02.662 sdbb: ios=641/731, merge=0/0, ticks=11677/65333, in_queue=77010, util=87.40% 00:14:02.662 sdbd: ios=642/727, merge=0/0, ticks=10672/66443, in_queue=77115, util=87.57% 00:14:02.662 sdbe: ios=641/660, merge=0/0, ticks=13135/63435, in_queue=76571, util=87.79% 00:14:02.662 sdbh: ios=640/666, merge=0/0, ticks=12567/63783, in_queue=76351, util=88.19% 00:14:02.662 
sdbj: ios=490/640, merge=0/0, ticks=9530/68071, in_queue=77602, util=88.24% 00:14:02.662 sdbl: ios=641/695, merge=0/0, ticks=12259/64720, in_queue=76979, util=88.45% 00:14:02.662 sdbr: ios=641/713, merge=0/0, ticks=11505/65117, in_queue=76622, util=88.71% 00:14:02.662 sdbc: ios=1000/963, merge=0/0, ticks=14098/62442, in_queue=76541, util=88.64% 00:14:02.662 sdbf: ios=998/992, merge=0/0, ticks=15346/62383, in_queue=77730, util=89.28% 00:14:02.662 sdbg: ios=962/1012, merge=0/0, ticks=12585/63725, in_queue=76310, util=89.15% 00:14:02.662 sdbi: ios=962/970, merge=0/0, ticks=14255/61643, in_queue=75899, util=89.02% 00:14:02.662 sdbk: ios=802/894, merge=0/0, ticks=9543/67304, in_queue=76848, util=88.46% 00:14:02.662 sdbm: ios=956/960, merge=0/0, ticks=13273/63035, in_queue=76308, util=89.32% 00:14:02.662 sdbn: ios=962/977, merge=0/0, ticks=14419/61379, in_queue=75798, util=89.54% 00:14:02.662 sdbo: ios=802/959, merge=0/0, ticks=9639/66366, in_queue=76005, util=89.93% 00:14:02.662 sdbp: ios=801/859, merge=0/0, ticks=8123/69617, in_queue=77741, util=90.21% 00:14:02.662 sdbu: ios=801/847, merge=0/0, ticks=10700/66872, in_queue=77573, util=90.34% 00:14:02.662 sdbq: ios=802/928, merge=0/0, ticks=11794/64753, in_queue=76547, util=90.29% 00:14:02.662 sdbs: ios=961/980, merge=0/0, ticks=11014/66140, in_queue=77154, util=90.70% 00:14:02.662 sdbt: ios=801/903, merge=0/0, ticks=8654/69125, in_queue=77780, util=90.62% 00:14:02.662 sdbv: ios=961/964, merge=0/0, ticks=11386/65023, in_queue=76410, util=90.97% 00:14:02.662 sdbw: ios=801/913, merge=0/0, ticks=10026/66749, in_queue=76775, util=91.27% 00:14:02.662 sdbx: ios=962/964, merge=0/0, ticks=12346/63621, in_queue=75967, util=91.38% 00:14:02.662 sdby: ios=962/982, merge=0/0, ticks=11606/64960, in_queue=76566, util=91.89% 00:14:02.662 sdbz: ios=801/839, merge=0/0, ticks=13457/63920, in_queue=77378, util=91.74% 00:14:02.662 sdca: ios=961/960, merge=0/0, ticks=10884/66161, in_queue=77045, util=92.19% 00:14:02.662 sdci: ios=962/993, 
merge=0/0, ticks=12032/65067, in_queue=77100, util=91.84% 00:14:02.662 sdcc: ios=641/676, merge=0/0, ticks=13423/62995, in_queue=76418, util=92.35% 00:14:02.662 sdcd: ios=624/640, merge=0/0, ticks=6925/70664, in_queue=77589, util=92.56% 00:14:02.662 sdcg: ios=641/679, merge=0/0, ticks=13441/62974, in_queue=76415, util=92.64% 00:14:02.662 sdck: ios=480/564, merge=0/0, ticks=6485/71832, in_queue=78317, util=92.56% 00:14:02.662 sdcn: ios=640/643, merge=0/0, ticks=8568/67634, in_queue=76202, util=92.92% 00:14:02.662 sdcp: ios=641/675, merge=0/0, ticks=10414/66120, in_queue=76534, util=92.87% 00:14:02.662 sdcq: ios=642/701, merge=0/0, ticks=12545/64542, in_queue=77088, util=93.11% 00:14:02.662 sdcs: ios=480/591, merge=0/0, ticks=9697/67454, in_queue=77151, util=93.31% 00:14:02.662 sdct: ios=640/660, merge=0/0, ticks=9908/66759, in_queue=76667, util=93.29% 00:14:02.662 sdcv: ios=641/699, merge=0/0, ticks=13565/62506, in_queue=76072, util=93.33% 00:14:02.662 sdcb: ios=641/718, merge=0/0, ticks=12663/63560, in_queue=76224, util=93.10% 00:14:02.662 sdce: ios=640/640, merge=0/0, ticks=11532/63352, in_queue=74885, util=93.84% 00:14:02.662 sdcf: ios=640/675, merge=0/0, ticks=8834/68427, in_queue=77262, util=94.42% 00:14:02.662 sdch: ios=640/685, merge=0/0, ticks=7856/69875, in_queue=77732, util=94.72% 00:14:02.662 sdcj: ios=480/615, merge=0/0, ticks=9901/67748, in_queue=77650, util=94.87% 00:14:02.662 sdcl: ios=640/689, merge=0/0, ticks=9310/67326, in_queue=76637, util=95.20% 00:14:02.662 sdcm: ios=640/713, merge=0/0, ticks=8481/67685, in_queue=76166, util=95.75% 00:14:02.662 sdco: ios=642/724, merge=0/0, ticks=13472/63734, in_queue=77206, util=96.23% 00:14:02.662 sdcr: ios=480/585, merge=0/0, ticks=9280/68068, in_queue=77349, util=96.22% 00:14:02.662 sdcu: ios=641/705, merge=0/0, ticks=9833/66887, in_queue=76721, util=96.58% 00:14:02.662 sda: ios=641/651, merge=0/0, ticks=12143/64565, in_queue=76708, util=96.24% 00:14:02.662 sdb: ios=641/692, merge=0/0, ticks=10984/65839, 
in_queue=76824, util=96.97% 00:14:02.662 sdd: ios=642/689, merge=0/0, ticks=10517/66592, in_queue=77110, util=96.91% 00:14:02.662 sde: ios=480/563, merge=0/0, ticks=11422/66504, in_queue=77926, util=97.38% 00:14:02.662 sdi: ios=480/576, merge=0/0, ticks=11224/65652, in_queue=76877, util=97.63% 00:14:02.662 sdm: ios=496/640, merge=0/0, ticks=7546/69926, in_queue=77473, util=97.46% 00:14:02.662 sdp: ios=641/694, merge=0/0, ticks=10679/65444, in_queue=76123, util=97.79% 00:14:02.662 sds: ios=641/643, merge=0/0, ticks=12479/63745, in_queue=76224, util=98.23% 00:14:02.662 sdx: ios=642/718, merge=0/0, ticks=8674/69324, in_queue=77999, util=98.57% 00:14:02.662 sdab: ios=641/668, merge=0/0, ticks=11758/64634, in_queue=76392, util=98.45% 00:14:02.662 [2024-07-25 08:56:09.693292] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 08:56:09.694745] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 08:56:09.696539] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 08:56:09.698264] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 08:56:09.699780] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 08:56:09.701321] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 08:56:09.702916] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 08:56:09.704439] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 08:56:09.706068] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 08:56:09.708414] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 
08:56:09.710001] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 08:56:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@78 -- # timing_exit fio 00:14:02.662 08:56:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:02.662 08:56:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:02.662 [2024-07-25 08:56:09.711854] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 08:56:09.713236] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 08:56:09.714488] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 08:56:09.715783] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 08:56:09.719198] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.662 [2024-07-25 08:56:09.721004] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.663 [2024-07-25 08:56:09.723102] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.663 [2024-07-25 08:56:09.724603] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.663 [2024-07-25 08:56:09.727344] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.663 [2024-07-25 08:56:09.730477] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.663 [2024-07-25 08:56:09.733719] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.663 [2024-07-25 08:56:09.738920] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.663 [2024-07-25 08:56:09.740602] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.663 
[2024-07-25 08:56:09.742106] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.663 [2024-07-25 08:56:09.743913] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.663 [2024-07-25 08:56:09.745538] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.663 08:56:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@80 -- # rm -f ./local-job0-0-verify.state 00:14:02.663 [2024-07-25 08:56:09.747227] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.749087] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 08:56:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:14:02.922 08:56:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@83 -- # rm -f 00:14:02.922 [2024-07-25 08:56:09.750673] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 08:56:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@84 -- # iscsicleanup 00:14:02.922 [2024-07-25 08:56:09.752064] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 Cleaning up iSCSI connection 00:14:02.922 08:56:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:14:02.922 08:56:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:14:02.922 [2024-07-25 08:56:09.753516] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.754809] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.756171] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.757615] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.759066] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.760544] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.762100] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.768032] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.769502] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.770995] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.772436] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.774121] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.775473] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.777218] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.778879] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.780576] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.782319] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.787639] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.791077] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.792722] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 
[2024-07-25 08:56:09.794262] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.797016] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.801444] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.808888] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.813062] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.922 [2024-07-25 08:56:09.818551] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.823460] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.826521] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.829647] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.834975] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.837289] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.839757] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.841914] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.843824] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.845851] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.848111] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.850708] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.852814] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.854433] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.856254] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.859916] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.864391] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.869885] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.873252] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.877329] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.879999] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.882378] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.884564] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.887514] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.889967] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.891935] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.893898] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.895795] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.897527] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.899436] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.911467] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.913059] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.914508] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.916155] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.918430] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.920085] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.921833] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.923633] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.926509] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.929570] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.935028] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.938722] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.940966] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.943727] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 
08:56:09.953900] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.956911] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.959739] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.923 [2024-07-25 08:56:09.965481] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:03.182 Logging out of session [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:14:03.182 Logging out of session [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:14:03.182 Logging out of session [sid: 12, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:14:03.182 Logging out of session [sid: 13, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:14:03.182 Logging out of session [sid: 14, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:14:03.182 Logging out of session [sid: 15, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:14:03.182 Logging out of session [sid: 16, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:14:03.182 Logging out of session [sid: 17, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:14:03.182 Logging out of session [sid: 18, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:14:03.182 Logging out of session [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:14:03.182 Logout of [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:14:03.182 Logout of [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:14:03.182 Logout of [sid: 12, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:14:03.182 Logout of [sid: 13, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 
00:14:03.182 Logout of [sid: 14, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:14:03.182 Logout of [sid: 15, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:14:03.182 Logout of [sid: 16, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:14:03.182 Logout of [sid: 17, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:14:03.182 Logout of [sid: 18, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:14:03.182 Logout of [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:14:03.182 08:56:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:14:03.182 08:56:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@985 -- # rm -rf 00:14:03.182 08:56:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@85 -- # killprocess 67467 00:14:03.182 08:56:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@950 -- # '[' -z 67467 ']' 00:14:03.182 08:56:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # kill -0 67467 00:14:03.182 08:56:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@955 -- # uname 00:14:03.182 08:56:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:03.182 08:56:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67467 00:14:03.182 08:56:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:03.182 killing process with pid 67467 00:14:03.182 08:56:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:03.182 08:56:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67467' 00:14:03.182 08:56:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@969 -- # kill 67467 
00:14:03.182 08:56:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@974 -- # wait 67467 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@86 -- # iscsitestfini 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:14:11.311 00:14:11.311 real 1m6.471s 00:14:11.311 user 4m38.978s 00:14:11.311 sys 0m23.503s 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:11.311 ************************************ 00:14:11.311 END TEST iscsi_tgt_iscsi_lvol 00:14:11.311 ************************************ 00:14:11.311 08:56:17 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@37 -- # run_test iscsi_tgt_fio /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:14:11.311 08:56:17 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:11.311 08:56:17 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.311 08:56:17 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:14:11.311 ************************************ 00:14:11.311 START TEST iscsi_tgt_fio 00:14:11.311 ************************************ 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:14:11.311 * Looking for test storage... 
00:14:11.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@11 -- # iscsitestinit 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@48 -- # '[' -z 10.0.0.1 ']' 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@53 -- # '[' -z 10.0.0.2 ']' 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@58 -- # MALLOC_BDEV_SIZE=64 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@59 -- # MALLOC_BLOCK_SIZE=4096 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@60 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@61 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@63 -- # timing_enter start_iscsi_tgt 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@66 -- # pid=72301 00:14:11.311 Process pid: 72301 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@67 -- # echo 'Process pid: 72301' 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@69 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@71 -- # waitforlisten 72301 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@65 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@831 -- # '[' -z 72301 ']' 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@835 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.311 08:56:17 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:14:11.311 [2024-07-25 08:56:17.504139] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:11.311 [2024-07-25 08:56:17.504304] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72301 ] 00:14:11.311 [2024-07-25 08:56:17.667656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.311 [2024-07-25 08:56:17.934738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.312 08:56:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:11.312 08:56:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@864 -- # return 0 00:14:11.312 08:56:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:14:12.685 iscsi_tgt is listening. Running tests... 00:14:12.685 08:56:19 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@75 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:14:12.685 08:56:19 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@77 -- # timing_exit start_iscsi_tgt 00:14:12.685 08:56:19 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:12.685 08:56:19 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:14:12.685 08:56:19 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:14:12.685 08:56:19 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:14:12.943 08:56:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:14:13.511 08:56:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # malloc_bdevs='Malloc0 ' 00:14:13.511 08:56:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:14:13.770 08:56:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # malloc_bdevs+=Malloc1 00:14:13.770 08:56:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:13.770 08:56:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 1024 512 00:14:15.726 08:56:22 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # bdev=Malloc2 00:14:15.726 08:56:22 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias 'raid0:0 Malloc2:1' 1:2 64 -d 00:14:15.726 08:56:22 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@91 -- # sleep 1 00:14:16.661 08:56:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@93 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:14:16.661 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:14:16.661 08:56:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@94 -- # iscsiadm -m node --login -p 
10.0.0.1:3260 00:14:16.661 [2024-07-25 08:56:23.602471] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:16.661 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:14:16.661 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:14:16.661 [2024-07-25 08:56:23.607469] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:16.661 08:56:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@95 -- # waitforiscsidevices 2 00:14:16.661 08:56:23 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@116 -- # local num=2 00:14:16.661 08:56:23 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:14:16.661 08:56:23 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:14:16.661 08:56:23 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:14:16.661 08:56:23 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:14:16.661 08:56:23 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # n=2 00:14:16.661 08:56:23 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@120 -- # '[' 2 -ne 2 ']' 00:14:16.661 08:56:23 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@123 -- # return 0 00:14:16.661 08:56:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@97 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; delete_tmp_files; exit 1' SIGINT SIGTERM EXIT 00:14:16.661 08:56:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:14:16.661 [global] 00:14:16.661 thread=1 00:14:16.661 invalidate=1 00:14:16.661 rw=randrw 00:14:16.661 time_based=1 00:14:16.661 runtime=1 00:14:16.661 ioengine=libaio 00:14:16.661 direct=1 00:14:16.661 bs=4096 00:14:16.661 iodepth=1 00:14:16.661 norandommap=0 00:14:16.661 numjobs=1 00:14:16.661 00:14:16.661 verify_dump=1 00:14:16.661 verify_backlog=512 
00:14:16.661 verify_state_save=0 00:14:16.661 do_verify=1 00:14:16.661 verify=crc32c-intel 00:14:16.661 [job0] 00:14:16.661 filename=/dev/sda 00:14:16.661 [job1] 00:14:16.661 filename=/dev/sdb 00:14:16.661 queue_depth set to 113 (sda) 00:14:16.661 queue_depth set to 113 (sdb) 00:14:16.920 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.920 job1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.920 fio-3.35 00:14:16.920 Starting 2 threads 00:14:16.920 [2024-07-25 08:56:23.864202] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:16.920 [2024-07-25 08:56:23.868426] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:17.855 [2024-07-25 08:56:24.972263] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:18.114 [2024-07-25 08:56:24.977168] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:18.114 00:14:18.114 job0: (groupid=0, jobs=1): err= 0: pid=72455: Thu Jul 25 08:56:25 2024 00:14:18.114 read: IOPS=6115, BW=23.9MiB/s (25.1MB/s)(23.9MiB/1001msec) 00:14:18.114 slat (nsec): min=2021, max=82078, avg=4809.32, stdev=1862.11 00:14:18.114 clat (usec): min=60, max=451, avg=101.50, stdev=15.19 00:14:18.114 lat (usec): min=64, max=533, avg=106.31, stdev=15.93 00:14:18.114 clat percentiles (usec): 00:14:18.114 | 1.00th=[ 70], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 90], 00:14:18.114 | 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 100], 60.00th=[ 103], 00:14:18.114 | 70.00th=[ 106], 80.00th=[ 113], 90.00th=[ 119], 95.00th=[ 126], 00:14:18.114 | 99.00th=[ 141], 99.50th=[ 153], 99.90th=[ 180], 99.95th=[ 251], 00:14:18.114 | 99.99th=[ 453] 00:14:18.114 bw ( KiB/s): min=11832, max=11832, per=25.01%, avg=11832.00, stdev= 0.00, samples=1 00:14:18.114 iops : min= 2958, max= 2958, avg=2958.00, stdev= 0.00, samples=1 00:14:18.114 
write: IOPS=3172, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec); 0 zone resets 00:14:18.114 slat (nsec): min=2717, max=39205, avg=5919.27, stdev=2337.26 00:14:18.114 clat (usec): min=62, max=239, avg=101.53, stdev=18.73 00:14:18.114 lat (usec): min=68, max=255, avg=107.45, stdev=19.42 00:14:18.114 clat percentiles (usec): 00:14:18.114 | 1.00th=[ 69], 5.00th=[ 76], 10.00th=[ 81], 20.00th=[ 86], 00:14:18.114 | 30.00th=[ 92], 40.00th=[ 95], 50.00th=[ 98], 60.00th=[ 103], 00:14:18.114 | 70.00th=[ 109], 80.00th=[ 116], 90.00th=[ 127], 95.00th=[ 137], 00:14:18.114 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 186], 99.95th=[ 188], 00:14:18.114 | 99.99th=[ 241] 00:14:18.114 bw ( KiB/s): min=12368, max=12368, per=49.54%, avg=12368.00, stdev= 0.00, samples=1 00:14:18.114 iops : min= 3092, max= 3092, avg=3092.00, stdev= 0.00, samples=1 00:14:18.114 lat (usec) : 100=52.09%, 250=47.87%, 500=0.04% 00:14:18.114 cpu : usr=2.70%, sys=6.10%, ctx=9298, majf=0, minf=9 00:14:18.114 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:18.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.114 issued rwts: total=6122,3176,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.114 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:18.114 job1: (groupid=0, jobs=1): err= 0: pid=72456: Thu Jul 25 08:56:25 2024 00:14:18.114 read: IOPS=5712, BW=22.3MiB/s (23.4MB/s)(22.3MiB/1001msec) 00:14:18.114 slat (nsec): min=1856, max=69342, avg=3268.71, stdev=2063.70 00:14:18.114 clat (usec): min=53, max=3917, avg=105.06, stdev=94.75 00:14:18.114 lat (usec): min=58, max=3922, avg=108.33, stdev=94.94 00:14:18.114 clat percentiles (usec): 00:14:18.114 | 1.00th=[ 73], 5.00th=[ 87], 10.00th=[ 88], 20.00th=[ 91], 00:14:18.115 | 30.00th=[ 97], 40.00th=[ 98], 50.00th=[ 100], 60.00th=[ 102], 00:14:18.115 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 120], 95.00th=[ 126], 
00:14:18.115 | 99.00th=[ 147], 99.50th=[ 161], 99.90th=[ 594], 99.95th=[ 3556], 00:14:18.115 | 99.99th=[ 3916] 00:14:18.115 bw ( KiB/s): min=11632, max=11632, per=24.59%, avg=11632.00, stdev= 0.00, samples=1 00:14:18.115 iops : min= 2908, max= 2908, avg=2908.00, stdev= 0.00, samples=1 00:14:18.115 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:14:18.115 slat (nsec): min=2642, max=44816, avg=4520.23, stdev=2664.19 00:14:18.115 clat (usec): min=59, max=3499, avg=117.03, stdev=104.62 00:14:18.115 lat (usec): min=64, max=3506, avg=121.55, stdev=104.76 00:14:18.115 clat percentiles (usec): 00:14:18.115 | 1.00th=[ 72], 5.00th=[ 92], 10.00th=[ 93], 20.00th=[ 97], 00:14:18.115 | 30.00th=[ 103], 40.00th=[ 105], 50.00th=[ 109], 60.00th=[ 114], 00:14:18.115 | 70.00th=[ 119], 80.00th=[ 128], 90.00th=[ 145], 95.00th=[ 161], 00:14:18.115 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 351], 99.95th=[ 3326], 00:14:18.115 | 99.99th=[ 3490] 00:14:18.115 bw ( KiB/s): min=12288, max=12288, per=49.22%, avg=12288.00, stdev= 0.00, samples=1 00:14:18.115 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:18.115 lat (usec) : 100=40.86%, 250=58.99%, 500=0.03%, 750=0.03% 00:14:18.115 lat (msec) : 4=0.08% 00:14:18.115 cpu : usr=2.10%, sys=5.00%, ctx=8790, majf=0, minf=5 00:14:18.115 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:18.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.115 issued rwts: total=5718,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.115 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:18.115 00:14:18.115 Run status group 0 (all jobs): 00:14:18.115 READ: bw=46.2MiB/s (48.4MB/s), 22.3MiB/s-23.9MiB/s (23.4MB/s-25.1MB/s), io=46.2MiB (48.5MB), run=1001-1001msec 00:14:18.115 WRITE: bw=24.4MiB/s (25.6MB/s), 12.0MiB/s-12.4MiB/s (12.6MB/s-13.0MB/s), io=24.4MiB 
(25.6MB), run=1001-1001msec 00:14:18.115 00:14:18.115 Disk stats (read/write): 00:14:18.115 sda: ios=5436/2926, merge=0/0, ticks=551/293, in_queue=845, util=90.84% 00:14:18.115 sdb: ios=5168/2696, merge=0/0, ticks=522/300, in_queue=822, util=89.09% 00:14:18.115 08:56:25 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:14:18.115 [global] 00:14:18.115 thread=1 00:14:18.115 invalidate=1 00:14:18.115 rw=randrw 00:14:18.115 time_based=1 00:14:18.115 runtime=1 00:14:18.115 ioengine=libaio 00:14:18.115 direct=1 00:14:18.115 bs=131072 00:14:18.115 iodepth=32 00:14:18.115 norandommap=0 00:14:18.115 numjobs=1 00:14:18.115 00:14:18.115 verify_dump=1 00:14:18.115 verify_backlog=512 00:14:18.115 verify_state_save=0 00:14:18.115 do_verify=1 00:14:18.115 verify=crc32c-intel 00:14:18.115 [job0] 00:14:18.115 filename=/dev/sda 00:14:18.115 [job1] 00:14:18.115 filename=/dev/sdb 00:14:18.115 queue_depth set to 113 (sda) 00:14:18.115 queue_depth set to 113 (sdb) 00:14:18.374 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:14:18.374 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:14:18.374 fio-3.35 00:14:18.374 Starting 2 threads 00:14:18.374 [2024-07-25 08:56:25.250764] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:18.374 [2024-07-25 08:56:25.254573] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:19.309 [2024-07-25 08:56:26.271333] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:19.309 [2024-07-25 08:56:26.379115] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:19.309 00:14:19.309 job0: (groupid=0, jobs=1): err= 0: pid=72526: Thu Jul 25 08:56:26 2024 00:14:19.309 read: IOPS=1569, BW=196MiB/s 
(206MB/s)(199MiB/1014msec) 00:14:19.309 slat (usec): min=5, max=117, avg=25.50, stdev=12.65 00:14:19.309 clat (usec): min=1202, max=31379, avg=7585.74, stdev=5282.95 00:14:19.309 lat (usec): min=1232, max=31416, avg=7611.23, stdev=5282.59 00:14:19.309 clat percentiles (usec): 00:14:19.309 | 1.00th=[ 1385], 5.00th=[ 1516], 10.00th=[ 1663], 20.00th=[ 1926], 00:14:19.309 | 30.00th=[ 3916], 40.00th=[ 6587], 50.00th=[ 7832], 60.00th=[ 8848], 00:14:19.309 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[12780], 95.00th=[17695], 00:14:19.309 | 99.00th=[26608], 99.50th=[27395], 99.90th=[31327], 99.95th=[31327], 00:14:19.309 | 99.99th=[31327] 00:14:19.309 bw ( KiB/s): min=83200, max=122356, per=27.73%, avg=102778.00, stdev=27687.47, samples=2 00:14:19.309 iops : min= 650, max= 955, avg=802.50, stdev=215.67, samples=2 00:14:19.309 write: IOPS=897, BW=112MiB/s (118MB/s)(105MiB/935msec); 0 zone resets 00:14:19.309 slat (usec): min=32, max=202, avg=109.16, stdev=35.26 00:14:19.309 clat (usec): min=8541, max=50091, avg=23817.52, stdev=5178.76 00:14:19.309 lat (usec): min=8620, max=50221, avg=23926.68, stdev=5189.74 00:14:19.309 clat percentiles (usec): 00:14:19.309 | 1.00th=[11994], 5.00th=[16319], 10.00th=[18482], 20.00th=[20579], 00:14:19.309 | 30.00th=[21890], 40.00th=[22676], 50.00th=[23200], 60.00th=[23725], 00:14:19.309 | 70.00th=[24773], 80.00th=[26870], 90.00th=[30016], 95.00th=[33162], 00:14:19.309 | 99.00th=[45876], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:14:19.309 | 99.99th=[50070] 00:14:19.309 bw ( KiB/s): min=78336, max=131334, per=47.40%, avg=104835.00, stdev=37475.25, samples=2 00:14:19.309 iops : min= 612, max= 1026, avg=819.00, stdev=292.74, samples=2 00:14:19.309 lat (msec) : 2=14.07%, 4=5.84%, 10=32.26%, 20=15.80%, 50=31.98% 00:14:19.309 lat (msec) : 100=0.04% 00:14:19.309 cpu : usr=11.85%, sys=7.40%, ctx=1702, majf=0, minf=5 00:14:19.309 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=96.2%, >=64=0.0% 00:14:19.309 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.309 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:14:19.309 issued rwts: total=1591,839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.309 latency : target=0, window=0, percentile=100.00%, depth=32 00:14:19.309 job1: (groupid=0, jobs=1): err= 0: pid=72527: Thu Jul 25 08:56:26 2024 00:14:19.309 read: IOPS=1328, BW=166MiB/s (174MB/s)(169MiB/1015msec) 00:14:19.309 slat (usec): min=5, max=1079, avg=20.26, stdev=43.58 00:14:19.309 clat (usec): min=1125, max=31411, avg=7922.62, stdev=5651.52 00:14:19.309 lat (usec): min=1144, max=31433, avg=7942.88, stdev=5653.11 00:14:19.309 clat percentiles (usec): 00:14:19.309 | 1.00th=[ 1336], 5.00th=[ 1500], 10.00th=[ 1614], 20.00th=[ 1860], 00:14:19.309 | 30.00th=[ 3261], 40.00th=[ 5932], 50.00th=[ 8979], 60.00th=[ 9765], 00:14:19.309 | 70.00th=[10421], 80.00th=[11076], 90.00th=[13173], 95.00th=[18220], 00:14:19.309 | 99.00th=[26870], 99.50th=[29492], 99.90th=[31327], 99.95th=[31327], 00:14:19.309 | 99.99th=[31327] 00:14:19.309 bw ( KiB/s): min=94464, max=116758, per=28.49%, avg=105611.00, stdev=15764.24, samples=2 00:14:19.309 iops : min= 738, max= 912, avg=825.00, stdev=123.04, samples=2 00:14:19.309 write: IOPS=901, BW=113MiB/s (118MB/s)(114MiB/1015msec); 0 zone resets 00:14:19.310 slat (usec): min=31, max=224, avg=87.90, stdev=40.30 00:14:19.310 clat (usec): min=2359, max=42785, avg=23583.61, stdev=5488.23 00:14:19.310 lat (usec): min=2486, max=42889, avg=23671.51, stdev=5492.08 00:14:19.310 clat percentiles (usec): 00:14:19.310 | 1.00th=[ 2737], 5.00th=[14484], 10.00th=[16909], 20.00th=[20579], 00:14:19.310 | 30.00th=[21890], 40.00th=[22676], 50.00th=[23200], 60.00th=[23987], 00:14:19.310 | 70.00th=[25560], 80.00th=[27132], 90.00th=[30016], 95.00th=[33162], 00:14:19.310 | 99.00th=[38536], 99.50th=[40633], 99.90th=[42730], 99.95th=[42730], 00:14:19.310 | 99.99th=[42730] 00:14:19.310 bw ( KiB/s): min=97792, max=130810, per=51.67%, avg=114301.00, 
stdev=23347.25, samples=2 00:14:19.310 iops : min= 764, max= 1021, avg=892.50, stdev=181.73, samples=2 00:14:19.310 lat (msec) : 2=13.96%, 4=6.41%, 10=18.47%, 20=25.01%, 50=36.15% 00:14:19.310 cpu : usr=8.48%, sys=4.14%, ctx=2047, majf=0, minf=5 00:14:19.310 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=98.6%, >=64=0.0% 00:14:19.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:14:19.310 issued rwts: total=1348,915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.310 latency : target=0, window=0, percentile=100.00%, depth=32 00:14:19.310 00:14:19.310 Run status group 0 (all jobs): 00:14:19.310 READ: bw=362MiB/s (380MB/s), 166MiB/s-196MiB/s (174MB/s-206MB/s), io=367MiB (385MB), run=1014-1015msec 00:14:19.310 WRITE: bw=216MiB/s (227MB/s), 112MiB/s-113MiB/s (118MB/s-118MB/s), io=219MiB (230MB), run=935-1015msec 00:14:19.310 00:14:19.310 Disk stats (read/write): 00:14:19.310 sda: ios=1318/778, merge=0/0, ticks=9043/18244, in_queue=27287, util=89.53% 00:14:19.310 sdb: ios=1294/780, merge=0/0, ticks=9628/18046, in_queue=27674, util=90.22% 00:14:19.569 08:56:26 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@101 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 524288 -d 128 -t randrw -r 1 -v 00:14:19.569 [global] 00:14:19.569 thread=1 00:14:19.569 invalidate=1 00:14:19.569 rw=randrw 00:14:19.569 time_based=1 00:14:19.569 runtime=1 00:14:19.569 ioengine=libaio 00:14:19.569 direct=1 00:14:19.569 bs=524288 00:14:19.569 iodepth=128 00:14:19.569 norandommap=0 00:14:19.569 numjobs=1 00:14:19.569 00:14:19.569 verify_dump=1 00:14:19.569 verify_backlog=512 00:14:19.569 verify_state_save=0 00:14:19.569 do_verify=1 00:14:19.569 verify=crc32c-intel 00:14:19.569 [job0] 00:14:19.569 filename=/dev/sda 00:14:19.569 [job1] 00:14:19.569 filename=/dev/sdb 00:14:19.569 queue_depth set to 113 (sda) 00:14:19.569 queue_depth set to 113 (sdb) 00:14:19.569 job0: (g=0): 
rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:14:19.569 job1: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:14:19.569 fio-3.35 00:14:19.569 Starting 2 threads 00:14:19.569 [2024-07-25 08:56:26.632557] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:19.569 [2024-07-25 08:56:26.636724] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:20.947 [2024-07-25 08:56:27.850633] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:20.947 [2024-07-25 08:56:27.855562] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:20.947 00:14:20.947 job0: (groupid=0, jobs=1): err= 0: pid=72591: Thu Jul 25 08:56:27 2024 00:14:20.947 read: IOPS=238, BW=119MiB/s (125MB/s)(121MiB/1013msec) 00:14:20.947 slat (usec): min=14, max=215299, avg=2312.94, stdev=14082.44 00:14:20.947 clat (msec): min=74, max=442, avg=224.79, stdev=115.70 00:14:20.947 lat (msec): min=74, max=442, avg=227.11, stdev=116.69 00:14:20.947 clat percentiles (msec): 00:14:20.947 | 1.00th=[ 79], 5.00th=[ 92], 10.00th=[ 104], 20.00th=[ 138], 00:14:20.947 | 30.00th=[ 155], 40.00th=[ 169], 50.00th=[ 188], 60.00th=[ 197], 00:14:20.947 | 70.00th=[ 236], 80.00th=[ 409], 90.00th=[ 422], 95.00th=[ 435], 00:14:20.947 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 443], 99.95th=[ 443], 00:14:20.947 | 99.99th=[ 443] 00:14:20.947 bw ( KiB/s): min=57344, max=139264, per=47.02%, avg=98304.00, stdev=57926.19, samples=2 00:14:20.947 iops : min= 112, max= 272, avg=192.00, stdev=113.14, samples=2 00:14:20.947 write: IOPS=266, BW=133MiB/s (140MB/s)(135MiB/1013msec); 0 zone resets 00:14:20.947 slat (usec): min=123, max=18276, avg=1286.17, stdev=2634.02 00:14:20.947 clat (msec): min=96, max=480, avg=263.17, stdev=128.56 00:14:20.947 lat (msec): min=97, max=481, avg=264.46, stdev=128.86 
00:14:20.947 clat percentiles (msec): 00:14:20.947 | 1.00th=[ 104], 5.00th=[ 107], 10.00th=[ 117], 20.00th=[ 148], 00:14:20.947 | 30.00th=[ 180], 40.00th=[ 190], 50.00th=[ 215], 60.00th=[ 247], 00:14:20.947 | 70.00th=[ 317], 80.00th=[ 443], 90.00th=[ 460], 95.00th=[ 468], 00:14:20.947 | 99.00th=[ 477], 99.50th=[ 477], 99.90th=[ 481], 99.95th=[ 481], 00:14:20.947 | 99.99th=[ 481] 00:14:20.947 bw ( KiB/s): min=75776, max=121856, per=42.02%, avg=98816.00, stdev=32583.48, samples=2 00:14:20.947 iops : min= 148, max= 238, avg=193.00, stdev=63.64, samples=2 00:14:20.947 lat (msec) : 100=4.88%, 250=60.94%, 500=34.18% 00:14:20.947 cpu : usr=7.61%, sys=2.37%, ctx=194, majf=0, minf=5 00:14:20.947 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.7% 00:14:20.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.947 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:14:20.947 issued rwts: total=242,270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:20.947 job1: (groupid=0, jobs=1): err= 0: pid=72592: Thu Jul 25 08:56:27 2024 00:14:20.947 read: IOPS=179, BW=89.8MiB/s (94.2MB/s)(95.0MiB/1058msec) 00:14:20.947 slat (usec): min=13, max=42465, avg=1749.77, stdev=4415.48 00:14:20.947 clat (msec): min=57, max=464, avg=284.00, stdev=117.09 00:14:20.947 lat (msec): min=67, max=464, avg=285.75, stdev=117.17 00:14:20.947 clat percentiles (msec): 00:14:20.947 | 1.00th=[ 68], 5.00th=[ 89], 10.00th=[ 103], 20.00th=[ 176], 00:14:20.947 | 30.00th=[ 213], 40.00th=[ 249], 50.00th=[ 284], 60.00th=[ 317], 00:14:20.947 | 70.00th=[ 384], 80.00th=[ 418], 90.00th=[ 430], 95.00th=[ 443], 00:14:20.947 | 99.00th=[ 464], 99.50th=[ 464], 99.90th=[ 464], 99.95th=[ 464], 00:14:20.947 | 99.99th=[ 464] 00:14:20.947 bw ( KiB/s): min=49152, max=88064, per=32.82%, avg=68608.00, stdev=27514.94, samples=2 00:14:20.947 iops : min= 96, max= 172, avg=134.00, stdev=53.74, 
samples=2 00:14:20.947 write: IOPS=204, BW=102MiB/s (107MB/s)(108MiB/1058msec); 0 zone resets 00:14:20.947 slat (usec): min=109, max=213070, avg=3084.95, stdev=14749.60 00:14:20.947 clat (msec): min=84, max=520, avg=320.92, stdev=119.29 00:14:20.947 lat (msec): min=84, max=525, avg=324.01, stdev=119.84 00:14:20.947 clat percentiles (msec): 00:14:20.947 | 1.00th=[ 86], 5.00th=[ 109], 10.00th=[ 148], 20.00th=[ 230], 00:14:20.947 | 30.00th=[ 255], 40.00th=[ 284], 50.00th=[ 317], 60.00th=[ 338], 00:14:20.947 | 70.00th=[ 435], 80.00th=[ 460], 90.00th=[ 472], 95.00th=[ 485], 00:14:20.947 | 99.00th=[ 502], 99.50th=[ 518], 99.90th=[ 523], 99.95th=[ 523], 00:14:20.947 | 99.99th=[ 523] 00:14:20.947 bw ( KiB/s): min=51200, max=97280, per=31.57%, avg=74240.00, stdev=32583.48, samples=2 00:14:20.947 iops : min= 100, max= 190, avg=145.00, stdev=63.64, samples=2 00:14:20.947 lat (msec) : 100=6.40%, 250=27.09%, 500=65.02%, 750=1.48% 00:14:20.947 cpu : usr=5.20%, sys=1.89%, ctx=324, majf=0, minf=7 00:14:20.947 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=3.9%, 32=7.9%, >=64=84.5% 00:14:20.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.947 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:14:20.947 issued rwts: total=190,216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:20.947 00:14:20.947 Run status group 0 (all jobs): 00:14:20.947 READ: bw=204MiB/s (214MB/s), 89.8MiB/s-119MiB/s (94.2MB/s-125MB/s), io=216MiB (226MB), run=1013-1058msec 00:14:20.947 WRITE: bw=230MiB/s (241MB/s), 102MiB/s-133MiB/s (107MB/s-140MB/s), io=243MiB (255MB), run=1013-1058msec 00:14:20.947 00:14:20.947 Disk stats (read/write): 00:14:20.947 sda: ios=257/221, merge=0/0, ticks=18248/25343, in_queue=43590, util=75.33% 00:14:20.947 sdb: ios=167/145, merge=0/0, ticks=17443/25239, in_queue=42682, util=79.03% 00:14:20.947 08:56:27 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@102 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 1024 -t read -r 1 -n 4 00:14:20.947 [global] 00:14:20.947 thread=1 00:14:20.947 invalidate=1 00:14:20.947 rw=read 00:14:20.947 time_based=1 00:14:20.947 runtime=1 00:14:20.947 ioengine=libaio 00:14:20.947 direct=1 00:14:20.947 bs=1048576 00:14:20.947 iodepth=1024 00:14:20.947 norandommap=1 00:14:20.947 numjobs=4 00:14:20.947 00:14:20.947 [job0] 00:14:20.947 filename=/dev/sda 00:14:20.947 [job1] 00:14:20.947 filename=/dev/sdb 00:14:20.947 queue_depth set to 113 (sda) 00:14:20.947 queue_depth set to 113 (sdb) 00:14:21.205 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:14:21.205 ... 00:14:21.205 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:14:21.205 ... 00:14:21.205 fio-3.35 00:14:21.205 Starting 8 threads 00:14:26.475 00:14:26.475 job0: (groupid=0, jobs=1): err= 0: pid=72659: Thu Jul 25 08:56:33 2024 00:14:26.475 read: IOPS=4, BW=5062KiB/s (5183kB/s)(26.0MiB/5260msec) 00:14:26.475 slat (usec): min=470, max=1237.2k, avg=57065.35, stdev=244153.06 00:14:26.475 clat (msec): min=3776, max=5257, avg=5171.42, stdev=291.30 00:14:26.475 lat (msec): min=5013, max=5259, avg=5228.48, stdev=62.54 00:14:26.475 clat percentiles (msec): 00:14:26.475 | 1.00th=[ 3775], 5.00th=[ 5000], 10.00th=[ 5000], 20.00th=[ 5201], 00:14:26.475 | 30.00th=[ 5269], 40.00th=[ 5269], 50.00th=[ 5269], 60.00th=[ 5269], 00:14:26.475 | 70.00th=[ 5269], 80.00th=[ 5269], 90.00th=[ 5269], 95.00th=[ 5269], 00:14:26.475 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:14:26.475 | 99.99th=[ 5269] 00:14:26.475 lat (msec) : >=2000=100.00% 00:14:26.475 cpu : usr=0.00%, sys=0.30%, ctx=29, majf=0, minf=6657 00:14:26.475 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:14:26.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:14:26.475 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:14:26.475 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.475 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:26.475 job0: (groupid=0, jobs=1): err= 0: pid=72660: Thu Jul 25 08:56:33 2024 00:14:26.475 read: IOPS=4, BW=4666KiB/s (4778kB/s)(24.0MiB/5267msec) 00:14:26.475 slat (usec): min=380, max=1237.1k, avg=61310.40, stdev=254062.34 00:14:26.475 clat (msec): min=3794, max=5265, avg=5179.26, stdev=301.51 00:14:26.475 lat (msec): min=5031, max=5266, avg=5240.57, stdev=63.10 00:14:26.475 clat percentiles (msec): 00:14:26.475 | 1.00th=[ 3809], 5.00th=[ 5000], 10.00th=[ 5067], 20.00th=[ 5269], 00:14:26.475 | 30.00th=[ 5269], 40.00th=[ 5269], 50.00th=[ 5269], 60.00th=[ 5269], 00:14:26.475 | 70.00th=[ 5269], 80.00th=[ 5269], 90.00th=[ 5269], 95.00th=[ 5269], 00:14:26.475 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:14:26.475 | 99.99th=[ 5269] 00:14:26.475 lat (msec) : >=2000=100.00% 00:14:26.475 cpu : usr=0.00%, sys=0.19%, ctx=32, majf=0, minf=6145 00:14:26.475 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:14:26.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.475 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:14:26.475 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.475 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:26.475 job0: (groupid=0, jobs=1): err= 0: pid=72661: Thu Jul 25 08:56:33 2024 00:14:26.475 read: IOPS=2, BW=2921KiB/s (2991kB/s)(15.0MiB/5258msec) 00:14:26.475 slat (usec): min=789, max=1856.7k, avg=139888.63, stdev=477962.11 00:14:26.475 clat (msec): min=3159, max=5255, avg=5047.26, stdev=532.25 00:14:26.475 lat (msec): min=5015, max=5257, avg=5187.15, stdev=104.01 00:14:26.475 clat percentiles (msec): 00:14:26.475 | 1.00th=[ 3171], 5.00th=[ 3171], 10.00th=[ 
5000], 20.00th=[ 5000], 00:14:26.475 | 30.00th=[ 5000], 40.00th=[ 5269], 50.00th=[ 5269], 60.00th=[ 5269], 00:14:26.475 | 70.00th=[ 5269], 80.00th=[ 5269], 90.00th=[ 5269], 95.00th=[ 5269], 00:14:26.475 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:14:26.475 | 99.99th=[ 5269] 00:14:26.475 lat (msec) : >=2000=100.00% 00:14:26.475 cpu : usr=0.00%, sys=0.29%, ctx=21, majf=0, minf=3841 00:14:26.475 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:26.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.475 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.475 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.475 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:26.475 job0: (groupid=0, jobs=1): err= 0: pid=72662: Thu Jul 25 08:56:33 2024 00:14:26.475 read: IOPS=3, BW=3698KiB/s (3787kB/s)(19.0MiB/5261msec) 00:14:26.475 slat (usec): min=503, max=1237.2k, avg=78175.41, stdev=284695.18 00:14:26.475 clat (msec): min=3775, max=5258, avg=5143.52, stdev=339.13 00:14:26.475 lat (msec): min=5012, max=5260, avg=5221.70, stdev=72.74 00:14:26.475 clat percentiles (msec): 00:14:26.475 | 1.00th=[ 3775], 5.00th=[ 3775], 10.00th=[ 5000], 20.00th=[ 5201], 00:14:26.475 | 30.00th=[ 5201], 40.00th=[ 5269], 50.00th=[ 5269], 60.00th=[ 5269], 00:14:26.475 | 70.00th=[ 5269], 80.00th=[ 5269], 90.00th=[ 5269], 95.00th=[ 5269], 00:14:26.475 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:14:26.475 | 99.99th=[ 5269] 00:14:26.475 lat (msec) : >=2000=100.00% 00:14:26.475 cpu : usr=0.00%, sys=0.29%, ctx=53, majf=0, minf=4865 00:14:26.475 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0% 00:14:26.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.475 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:14:26.475 issued rwts: total=19,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:14:26.475 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:26.475 job1: (groupid=0, jobs=1): err= 0: pid=72663: Thu Jul 25 08:56:33 2024 00:14:26.475 read: IOPS=3, BW=3099KiB/s (3173kB/s)(16.0MiB/5287msec) 00:14:26.476 slat (usec): min=868, max=1246.2k, avg=92231.50, stdev=311890.10 00:14:26.476 clat (msec): min=3810, max=5283, avg=5166.66, stdev=365.51 00:14:26.476 lat (msec): min=5057, max=5286, avg=5258.89, stdev=54.34 00:14:26.476 clat percentiles (msec): 00:14:26.476 | 1.00th=[ 3809], 5.00th=[ 3809], 10.00th=[ 5067], 20.00th=[ 5269], 00:14:26.476 | 30.00th=[ 5269], 40.00th=[ 5269], 50.00th=[ 5269], 60.00th=[ 5269], 00:14:26.476 | 70.00th=[ 5269], 80.00th=[ 5269], 90.00th=[ 5269], 95.00th=[ 5269], 00:14:26.476 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:14:26.476 | 99.99th=[ 5269] 00:14:26.476 lat (msec) : >=2000=100.00% 00:14:26.476 cpu : usr=0.00%, sys=0.26%, ctx=25, majf=0, minf=4097 00:14:26.476 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:14:26.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.476 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.476 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.476 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:26.476 job1: (groupid=0, jobs=1): err= 0: pid=72664: Thu Jul 25 08:56:33 2024 00:14:26.476 read: IOPS=1, BW=1169KiB/s (1197kB/s)(6144KiB/5257msec) 00:14:26.476 slat (usec): min=1044, max=1451.9k, avg=243001.62, stdev=592221.56 00:14:26.476 clat (msec): min=3798, max=5255, avg=5010.51, stdev=593.72 00:14:26.476 lat (msec): min=5250, max=5256, avg=5253.52, stdev= 2.27 00:14:26.476 clat percentiles (msec): 00:14:26.476 | 1.00th=[ 3809], 5.00th=[ 3809], 10.00th=[ 3809], 20.00th=[ 5269], 00:14:26.476 | 30.00th=[ 5269], 40.00th=[ 5269], 50.00th=[ 5269], 60.00th=[ 5269], 00:14:26.476 | 70.00th=[ 5269], 
80.00th=[ 5269], 90.00th=[ 5269], 95.00th=[ 5269], 00:14:26.476 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:14:26.476 | 99.99th=[ 5269] 00:14:26.476 lat (msec) : >=2000=100.00% 00:14:26.476 cpu : usr=0.00%, sys=0.13%, ctx=15, majf=0, minf=1537 00:14:26.476 IO depths : 1=16.7%, 2=33.3%, 4=50.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:26.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.476 complete : 0=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.476 issued rwts: total=6,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.476 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:26.476 job1: (groupid=0, jobs=1): err= 0: pid=72665: Thu Jul 25 08:56:33 2024 00:14:26.476 read: IOPS=5, BW=5987KiB/s (6131kB/s)(31.0MiB/5302msec) 00:14:26.476 slat (usec): min=680, max=1864.3k, avg=68289.91, stdev=335526.03 00:14:26.476 clat (msec): min=3184, max=5300, avg=5206.08, stdev=377.71 00:14:26.476 lat (msec): min=5048, max=5301, avg=5274.37, stdev=43.64 00:14:26.476 clat percentiles (msec): 00:14:26.476 | 1.00th=[ 3171], 5.00th=[ 5067], 10.00th=[ 5269], 20.00th=[ 5269], 00:14:26.476 | 30.00th=[ 5269], 40.00th=[ 5269], 50.00th=[ 5269], 60.00th=[ 5269], 00:14:26.476 | 70.00th=[ 5269], 80.00th=[ 5269], 90.00th=[ 5269], 95.00th=[ 5269], 00:14:26.476 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:14:26.476 | 99.99th=[ 5269] 00:14:26.476 lat (msec) : >=2000=100.00% 00:14:26.476 cpu : usr=0.00%, sys=0.49%, ctx=35, majf=0, minf=7937 00:14:26.476 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:14:26.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.476 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:14:26.476 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.476 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:26.476 job1: (groupid=0, 
jobs=1): err= 0: pid=72666: Thu Jul 25 08:56:33 2024 00:14:26.476 read: IOPS=4, BW=4456KiB/s (4563kB/s)(23.0MiB/5285msec) 00:14:26.476 slat (usec): min=402, max=1857.0k, avg=90750.30, stdev=387596.11 00:14:26.476 clat (msec): min=3197, max=5283, avg=5175.66, stdev=433.67 00:14:26.476 lat (msec): min=5054, max=5284, avg=5266.41, stdev=46.46 00:14:26.476 clat percentiles (msec): 00:14:26.476 | 1.00th=[ 3205], 5.00th=[ 5067], 10.00th=[ 5269], 20.00th=[ 5269], 00:14:26.476 | 30.00th=[ 5269], 40.00th=[ 5269], 50.00th=[ 5269], 60.00th=[ 5269], 00:14:26.476 | 70.00th=[ 5269], 80.00th=[ 5269], 90.00th=[ 5269], 95.00th=[ 5269], 00:14:26.476 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:14:26.476 | 99.99th=[ 5269] 00:14:26.476 lat (msec) : >=2000=100.00% 00:14:26.476 cpu : usr=0.00%, sys=0.19%, ctx=43, majf=0, minf=5889 00:14:26.476 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:14:26.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.476 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:14:26.476 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.476 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:26.476 00:14:26.476 Run status group 0 (all jobs): 00:14:26.476 READ: bw=30.2MiB/s (31.6MB/s), 1169KiB/s-5987KiB/s (1197kB/s-6131kB/s), io=160MiB (168MB), run=5257-5302msec 00:14:26.476 00:14:26.476 Disk stats (read/write): 00:14:26.476 sda: ios=56/0, merge=0/0, ticks=94910/0, in_queue=94910, util=98.01% 00:14:26.476 sdb: ios=35/0, merge=0/0, ticks=68700/0, in_queue=68700, util=96.22% 00:14:26.476 08:56:33 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@104 -- # '[' 1 -eq 1 ']' 00:14:26.476 08:56:33 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t write -r 300 -v 00:14:26.735 [global] 00:14:26.735 thread=1 00:14:26.735 invalidate=1 00:14:26.735 rw=write 00:14:26.735 
time_based=1 00:14:26.735 runtime=300 00:14:26.735 ioengine=libaio 00:14:26.735 direct=1 00:14:26.735 bs=4096 00:14:26.735 iodepth=1 00:14:26.735 norandommap=0 00:14:26.735 numjobs=1 00:14:26.735 00:14:26.735 verify_dump=1 00:14:26.735 verify_backlog=512 00:14:26.735 verify_state_save=0 00:14:26.735 do_verify=1 00:14:26.735 verify=crc32c-intel 00:14:26.735 [job0] 00:14:26.735 filename=/dev/sda 00:14:26.735 [job1] 00:14:26.735 filename=/dev/sdb 00:14:26.735 queue_depth set to 113 (sda) 00:14:26.735 queue_depth set to 113 (sdb) 00:14:26.735 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:26.735 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:26.735 fio-3.35 00:14:26.735 Starting 2 threads 00:14:26.735 [2024-07-25 08:56:33.776919] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:26.735 [2024-07-25 08:56:33.782494] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:34.890 [2024-07-25 08:56:41.803565] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:44.880 [2024-07-25 08:56:50.428080] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:52.997 [2024-07-25 08:56:58.987452] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:01.119 [2024-07-25 08:57:07.446428] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:07.686 [2024-07-25 08:57:14.696496] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:15.813 [2024-07-25 08:57:22.330100] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:25.777 [2024-07-25 08:57:31.852692] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:33.885 [2024-07-25 08:57:40.187581] scsi_bdev.c: 616:bdev_scsi_inquiry: 
*NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:33.885 [2024-07-25 08:57:40.329464] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:41.991 [2024-07-25 08:57:48.600872] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:50.107 [2024-07-25 08:57:56.974293] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:58.228 [2024-07-25 08:58:04.550083] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:06.348 [2024-07-25 08:58:12.614393] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:14.496 [2024-07-25 08:58:20.294204] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:22.620 [2024-07-25 08:58:28.437790] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:30.787 [2024-07-25 08:58:36.593066] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:38.883 [2024-07-25 08:58:44.809105] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:38.883 [2024-07-25 08:58:45.271827] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:46.990 [2024-07-25 08:58:52.946974] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.155 [2024-07-25 08:59:00.845892] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:01.724 [2024-07-25 08:59:08.551292] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:09.837 [2024-07-25 08:59:16.507654] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:17.953 [2024-07-25 08:59:23.871938] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:26.157 [2024-07-25 08:59:32.035179] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 
00:17:34.284 [2024-07-25 08:59:40.527544] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:42.404 [2024-07-25 08:59:48.223992] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:42.404 [2024-07-25 08:59:48.582940] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:50.517 [2024-07-25 08:59:56.372772] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:58.631 [2024-07-25 09:00:05.286423] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:06.757 [2024-07-25 09:00:13.855147] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:16.743 [2024-07-25 09:00:22.481504] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:24.893 [2024-07-25 09:00:31.056514] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:33.040 [2024-07-25 09:00:38.729775] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:39.627 [2024-07-25 09:00:46.668864] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:51.944 [2024-07-25 09:00:56.885911] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:51.944 [2024-07-25 09:00:58.124243] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:00.048 [2024-07-25 09:01:06.205903] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:06.621 [2024-07-25 09:01:13.348160] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:14.765 [2024-07-25 09:01:21.160525] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:22.886 [2024-07-25 09:01:28.683761] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:27.082 [2024-07-25 09:01:33.884249] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:27.082 [2024-07-25 09:01:33.888949] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:27.082 00:19:27.082 job0: (groupid=0, jobs=1): err= 0: pid=72756: Thu Jul 25 09:01:33 2024 00:19:27.082 read: IOPS=4005, BW=15.6MiB/s (16.4MB/s)(4694MiB/299997msec) 00:19:27.082 slat (nsec): min=1572, max=686470, avg=4899.03, stdev=2321.22 00:19:27.082 clat (nsec): min=812, max=3878.7k, avg=117231.27, stdev=21950.11 00:19:27.082 lat (usec): min=61, max=3882, avg=122.13, stdev=22.32 00:19:27.082 clat percentiles (usec): 00:19:27.082 | 1.00th=[ 82], 5.00th=[ 89], 10.00th=[ 95], 20.00th=[ 101], 00:19:27.082 | 30.00th=[ 106], 40.00th=[ 112], 50.00th=[ 116], 60.00th=[ 121], 00:19:27.082 | 70.00th=[ 126], 80.00th=[ 131], 90.00th=[ 141], 95.00th=[ 151], 00:19:27.082 | 99.00th=[ 176], 99.50th=[ 188], 99.90th=[ 239], 99.95th=[ 322], 00:19:27.082 | 99.99th=[ 498] 00:19:27.082 write: IOPS=4007, BW=15.7MiB/s (16.4MB/s)(4696MiB/299997msec); 0 zone resets 00:19:27.082 slat (usec): min=2, max=1491, avg= 6.50, stdev= 3.58 00:19:27.082 clat (nsec): min=417, max=3542.7k, avg=118837.57, stdev=27535.86 00:19:27.082 lat (usec): min=61, max=3572, avg=125.34, stdev=27.97 00:19:27.082 clat percentiles (usec): 00:19:27.082 | 1.00th=[ 69], 5.00th=[ 79], 10.00th=[ 86], 20.00th=[ 100], 00:19:27.082 | 30.00th=[ 109], 40.00th=[ 114], 50.00th=[ 119], 60.00th=[ 123], 00:19:27.082 | 70.00th=[ 129], 80.00th=[ 137], 90.00th=[ 149], 95.00th=[ 159], 00:19:27.082 | 99.00th=[ 188], 99.50th=[ 202], 99.90th=[ 260], 99.95th=[ 338], 00:19:27.082 | 99.99th=[ 586] 00:19:27.082 bw ( KiB/s): min=12032, max=20728, per=50.17%, avg=16044.19, stdev=1679.10, samples=599 00:19:27.082 iops : min= 3008, max= 5182, avg=4010.98, stdev=419.76, samples=599 00:19:27.082 lat (nsec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:19:27.082 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.04% 00:19:27.082 lat (usec) : 
100=18.80%, 250=81.05%, 500=0.09%, 750=0.01%, 1000=0.01% 00:19:27.082 lat (msec) : 2=0.01%, 4=0.01% 00:19:27.082 cpu : usr=2.43%, sys=5.42%, ctx=2428718, majf=0, minf=1 00:19:27.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.082 issued rwts: total=1201664,1202125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:27.082 job1: (groupid=0, jobs=1): err= 0: pid=72757: Thu Jul 25 09:01:33 2024 00:19:27.082 read: IOPS=3988, BW=15.6MiB/s (16.3MB/s)(4674MiB/300000msec) 00:19:27.082 slat (nsec): min=1412, max=1590.8k, avg=4331.13, stdev=2709.28 00:19:27.082 clat (nsec): min=814, max=3599.2k, avg=115005.01, stdev=21863.25 00:19:27.082 lat (usec): min=48, max=3607, avg=119.34, stdev=22.38 00:19:27.082 clat percentiles (usec): 00:19:27.082 | 1.00th=[ 82], 5.00th=[ 88], 10.00th=[ 95], 20.00th=[ 100], 00:19:27.082 | 30.00th=[ 104], 40.00th=[ 109], 50.00th=[ 114], 60.00th=[ 118], 00:19:27.082 | 70.00th=[ 122], 80.00th=[ 128], 90.00th=[ 137], 95.00th=[ 149], 00:19:27.082 | 99.00th=[ 178], 99.50th=[ 194], 99.90th=[ 247], 99.95th=[ 314], 00:19:27.082 | 99.99th=[ 506] 00:19:27.082 write: IOPS=3988, BW=15.6MiB/s (16.3MB/s)(4674MiB/300000msec); 0 zone resets 00:19:27.082 slat (usec): min=2, max=767, avg= 6.21, stdev= 3.04 00:19:27.082 clat (nsec): min=462, max=3813.5k, avg=123027.76, stdev=32156.27 00:19:27.082 lat (usec): min=57, max=3829, avg=129.24, stdev=32.56 00:19:27.082 clat percentiles (usec): 00:19:27.082 | 1.00th=[ 65], 5.00th=[ 74], 10.00th=[ 88], 20.00th=[ 103], 00:19:27.082 | 30.00th=[ 111], 40.00th=[ 115], 50.00th=[ 121], 60.00th=[ 126], 00:19:27.082 | 70.00th=[ 133], 80.00th=[ 143], 90.00th=[ 163], 95.00th=[ 176], 00:19:27.082 | 99.00th=[ 206], 99.50th=[ 225], 99.90th=[ 273], 99.95th=[ 347], 
00:19:27.082 | 99.99th=[ 562] 00:19:27.082 bw ( KiB/s): min=11008, max=20688, per=49.94%, avg=15971.78, stdev=1734.49, samples=599 00:19:27.082 iops : min= 2752, max= 5172, avg=3992.87, stdev=433.61, samples=599 00:19:27.082 lat (nsec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:19:27.082 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.07% 00:19:27.082 lat (usec) : 100=18.51%, 250=81.26%, 500=0.13%, 750=0.01%, 1000=0.01% 00:19:27.082 lat (msec) : 2=0.01%, 4=0.01% 00:19:27.082 cpu : usr=2.38%, sys=5.22%, ctx=2418376, majf=0, minf=2 00:19:27.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.082 issued rwts: total=1196506,1196544,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:27.082 00:19:27.082 Run status group 0 (all jobs): 00:19:27.082 READ: bw=31.2MiB/s (32.7MB/s), 15.6MiB/s-15.6MiB/s (16.3MB/s-16.4MB/s), io=9368MiB (9823MB), run=299997-300000msec 00:19:27.082 WRITE: bw=31.2MiB/s (32.7MB/s), 15.6MiB/s-15.7MiB/s (16.3MB/s-16.4MB/s), io=9370MiB (9825MB), run=299997-300000msec 00:19:27.082 00:19:27.082 Disk stats (read/write): 00:19:27.082 sda: ios=1203174/1201664, merge=0/0, ticks=137745/141347, in_queue=279092, util=100.00% 00:19:27.083 sdb: ios=1196273/1196291, merge=0/0, ticks=133036/144981, in_queue=278017, util=100.00% 00:19:27.083 09:01:33 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@116 -- # fio_pid=76126 00:19:27.083 09:01:33 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 128 -t rw -r 10 00:19:27.083 09:01:33 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@118 -- # sleep 3 00:19:27.083 [global] 00:19:27.083 thread=1 00:19:27.083 invalidate=1 00:19:27.083 rw=rw 00:19:27.083 time_based=1 00:19:27.083 runtime=10 00:19:27.083 
ioengine=libaio 00:19:27.083 direct=1 00:19:27.083 bs=1048576 00:19:27.083 iodepth=128 00:19:27.083 norandommap=1 00:19:27.083 numjobs=1 00:19:27.083 00:19:27.083 [job0] 00:19:27.083 filename=/dev/sda 00:19:27.083 [job1] 00:19:27.083 filename=/dev/sdb 00:19:27.083 queue_depth set to 113 (sda) 00:19:27.083 queue_depth set to 113 (sdb) 00:19:27.083 job0: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:27.083 job1: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:27.083 fio-3.35 00:19:27.083 Starting 2 threads 00:19:27.083 [2024-07-25 09:01:34.150631] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:27.083 [2024-07-25 09:01:34.155354] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:30.367 09:01:36 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:30.367 [2024-07-25 09:01:37.136371] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (raid0) received event(SPDK_BDEV_EVENT_REMOVE) 00:19:30.367 [2024-07-25 09:01:37.137474] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c27 00:19:30.367 [2024-07-25 09:01:37.140658] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c27 00:19:30.367 [2024-07-25 09:01:37.142910] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c27 00:19:30.367 [2024-07-25 09:01:37.147240] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.367 [2024-07-25 09:01:37.148349] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.367 [2024-07-25 09:01:37.150724] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.367 [2024-07-25 09:01:37.153023] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found 
task for transfer_tag=c29 00:19:30.367 [2024-07-25 09:01:37.154910] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.367 [2024-07-25 09:01:37.156845] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.367 [2024-07-25 09:01:37.156937] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.367 [2024-07-25 09:01:37.157006] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.367 [2024-07-25 09:01:37.157074] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.368 [2024-07-25 09:01:37.157137] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.368 [2024-07-25 09:01:37.157201] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.368 [2024-07-25 09:01:37.157259] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.368 [2024-07-25 09:01:37.170215] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.368 [2024-07-25 09:01:37.171853] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.368 [2024-07-25 09:01:37.171930] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.368 [2024-07-25 09:01:37.178656] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c29 00:19:30.368 [2024-07-25 09:01:37.178729] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.368 09:01:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:19:30.368 09:01:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:30.368 [2024-07-25 09:01:37.182985] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 
00:19:30.368 [2024-07-25 09:01:37.185186] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.368 [2024-07-25 09:01:37.186355] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.368 [2024-07-25 09:01:37.188557] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.368 [2024-07-25 09:01:37.189848] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.368 [2024-07-25 09:01:37.191156] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.368 [2024-07-25 09:01:37.208349] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.368 [2024-07-25 09:01:37.208463] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.368 [2024-07-25 09:01:37.209441] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.368 [2024-07-25 09:01:37.209969] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.368 [2024-07-25 09:01:37.210510] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.368 [2024-07-25 09:01:37.211032] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.368 [2024-07-25 09:01:37.211584] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.368 [2024-07-25 09:01:37.212092] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.368 [2024-07-25 09:01:37.212654] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c2a 00:19:30.626 09:01:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:19:30.626 09:01:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:30.626 fio: io_u error on 
file /dev/sda: Input/output error: write offset=72351744, buflen=1048576 00:19:30.627 fio: io_u error on file /dev/sda: Input/output error: write offset=75497472, buflen=1048576 00:19:31.193 09:01:38 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=76546048, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=77594624, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=78643200, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=79691776, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=80740352, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=81788928, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=82837504, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=83886080, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=84934656, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=85983232, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=87031808, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=88080384, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=89128960, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=90177536, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=73400320, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=74448896, buflen=1048576 
00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=54525952, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=91226112, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=92274688, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=93323264, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=94371840, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=55574528, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=56623104, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=57671680, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=58720256, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=59768832, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=95420416, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=96468992, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=60817408, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=97517568, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=98566144, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=99614720, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=100663296, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=61865984, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=62914560, buflen=1048576 00:19:31.193 
fio: io_u error on file /dev/sda: Input/output error: write offset=101711872, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=102760448, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=63963136, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=103809024, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=104857600, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=65011712, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=66060288, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=90177536, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=67108864, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=91226112, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=105906176, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=92274688, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=68157440, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=69206016, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=93323264, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=70254592, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=124780544, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=125829120, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=126877696, buflen=1048576 00:19:31.193 fio: 
io_u error on file /dev/sda: Input/output error: read offset=71303168, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=94371840, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=106954752, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=108003328, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=95420416, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=72351744, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=127926272, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=73400320, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=128974848, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=109051904, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=110100480, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=130023424, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=131072000, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=74448896, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=75497472, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=132120576, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=76546048, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: read offset=96468992, buflen=1048576 00:19:31.193 fio: io_u error on file /dev/sda: Input/output error: write offset=133169152, buflen=1048576 00:19:31.194 fio: 
io_u error on file /dev/sda: Input/output error: read offset=97517568, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=77594624, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=111149056, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=0, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=112197632, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=78643200, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=1048576, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=113246208, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=2097152, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=98566144, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=99614720, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=79691776, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=114294784, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=3145728, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=115343360, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=100663296, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=101711872, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=80740352, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=4194304, buflen=1048576 00:19:31.194 fio: io_u error on 
file /dev/sda: Input/output error: read offset=81788928, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=5242880, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=82837504, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=116391936, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=117440512, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=102760448, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=83886080, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=84934656, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=6291456, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=118489088, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=103809024, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=7340032, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=85983232, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=8388608, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=104857600, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=87031808, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=105906176, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=119537664, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=9437184, buflen=1048576 00:19:31.194 fio: io_u error on file 
/dev/sda: Input/output error: read offset=88080384, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=106954752, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=120586240, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=108003328, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=121634816, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=109051904, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=89128960, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=110100480, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=122683392, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=10485760, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: write offset=123731968, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=111149056, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=112197632, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=113246208, buflen=1048576 00:19:31.194 fio: io_u error on file /dev/sda: Input/output error: read offset=114294784, buflen=1048576 00:19:31.194 fio: pid=76166, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:31.451 [2024-07-25 09:01:38.323544] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Malloc2) received event(SPDK_BDEV_EVENT_REMOVE) 00:19:31.451 [2024-07-25 09:01:38.324791] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1d 00:19:31.451 [2024-07-25 09:01:38.325405] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task 
for transfer_tag=d1d 00:19:31.451 [2024-07-25 09:01:38.325996] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1d 00:19:31.451 [2024-07-25 09:01:38.326939] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1d 00:19:34.739 [2024-07-25 09:01:41.742117] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1d 00:19:34.739 [2024-07-25 09:01:41.742862] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1d 00:19:34.739 [2024-07-25 09:01:41.743408] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1d 00:19:34.739 [2024-07-25 09:01:41.743510] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1d 00:19:34.739 [2024-07-25 09:01:41.743574] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1d 00:19:34.739 [2024-07-25 09:01:41.743642] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1d 00:19:34.739 [2024-07-25 09:01:41.743701] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1d 00:19:34.739 [2024-07-25 09:01:41.743763] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1d 00:19:34.739 [2024-07-25 09:01:41.748901] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.756924] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.758709] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.760060] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.762328] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.763590] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 
00:19:34.739 [2024-07-25 09:01:41.764845] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.766140] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.767406] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.768616] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.769854] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.770656] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.771501] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.772326] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.773124] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.773935] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1e 00:19:34.739 [2024-07-25 09:01:41.774811] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.739 [2024-07-25 09:01:41.775766] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.739 09:01:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@131 -- # fio_status=0 00:19:34.739 09:01:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # wait 76126 00:19:34.740 [2024-07-25 09:01:41.776604] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.740 [2024-07-25 09:01:41.777363] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.740 [2024-07-25 09:01:41.778098] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for 
transfer_tag=d1f 00:19:34.740 [2024-07-25 09:01:41.778857] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.740 [2024-07-25 09:01:41.779707] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.740 [2024-07-25 09:01:41.780222] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.740 [2024-07-25 09:01:41.780827] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.740 [2024-07-25 09:01:41.780925] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.740 [2024-07-25 09:01:41.781863] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.740 [2024-07-25 09:01:41.782403] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.740 [2024-07-25 09:01:41.782951] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.740 [2024-07-25 09:01:41.783500] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.740 [2024-07-25 09:01:41.784037] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.740 [2024-07-25 09:01:41.784583] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d1f 00:19:34.740 [2024-07-25 09:01:41.785135] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 [2024-07-25 09:01:41.785691] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 [2024-07-25 09:01:41.786218] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 [2024-07-25 09:01:41.786753] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 [2024-07-25 09:01:41.787285] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 
[2024-07-25 09:01:41.787847] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 [2024-07-25 09:01:41.788396] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 [2024-07-25 09:01:41.788928] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 [2024-07-25 09:01:41.789484] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 [2024-07-25 09:01:41.790070] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 [2024-07-25 09:01:41.790146] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 [2024-07-25 09:01:41.791128] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 [2024-07-25 09:01:41.791658] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 [2024-07-25 09:01:41.792188] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 [2024-07-25 09:01:41.792734] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 [2024-07-25 09:01:41.793323] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d20 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=696254464, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=697303040, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=692060160, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=693108736, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=694157312, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=695205888, buflen=1048576 00:19:34.740 fio: io_u 
error on file /dev/sdb: Input/output error: read offset=630194176, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=631242752, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=632291328, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=698351616, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=699400192, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=700448768, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=633339904, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=634388480, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=635437056, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=701497344, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=702545920, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=636485632, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=703594496, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=637534208, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=704643072, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=638582784, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=639631360, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=705691648, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=640679936, buflen=1048576 00:19:34.740 fio: 
io_u error on file /dev/sdb: Input/output error: write offset=706740224, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=707788800, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=641728512, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=708837376, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=642777088, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=643825664, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=709885952, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=710934528, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=711983104, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=644874240, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=645922816, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=646971392, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=648019968, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=649068544, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=713031680, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=650117120, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=714080256, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: read offset=651165696, buflen=1048576 00:19:34.740 fio: io_u error on file /dev/sdb: Input/output error: write offset=715128832, buflen=1048576 00:19:34.740 
fio: io_u error on file /dev/sdb: Input/output error: write offset=716177408, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=717225984, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=652214272, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=718274560, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=653262848, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=654311424, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=655360000, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=719323136, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=720371712, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=656408576, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=657457152, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=658505728, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=721420288, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=659554304, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=660602880, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=722468864, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=723517440, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=661651456, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=662700032, buflen=1048576 
00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=663748608, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=724566016, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=664797184, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=665845760, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=666894336, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=725614592, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=726663168, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=667942912, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=668991488, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=727711744, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=728760320, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=670040064, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=729808896, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=671088640, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=730857472, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=731906048, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=732954624, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=734003200, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=672137216, 
buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=673185792, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=735051776, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=674234368, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=675282944, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=676331520, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=677380096, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=736100352, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=678428672, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=737148928, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=679477248, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=738197504, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=680525824, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=681574400, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=739246080, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=682622976, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=683671552, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=740294656, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=741343232, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read 
offset=684720128, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=685768704, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=686817280, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=742391808, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=743440384, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=744488960, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=687865856, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=688914432, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=689963008, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=691011584, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=692060160, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=693108736, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=745537536, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=694157312, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=695205888, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=696254464, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=746586112, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=747634688, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=748683264, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: 
write offset=749731840, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: read offset=697303040, buflen=1048576 00:19:34.741 fio: io_u error on file /dev/sdb: Input/output error: write offset=750780416, buflen=1048576 00:19:34.742 fio: io_u error on file /dev/sdb: Input/output error: write offset=751828992, buflen=1048576 00:19:34.742 fio: io_u error on file /dev/sdb: Input/output error: read offset=698351616, buflen=1048576 00:19:34.742 fio: io_u error on file /dev/sdb: Input/output error: read offset=699400192, buflen=1048576 00:19:34.742 fio: io_u error on file /dev/sdb: Input/output error: read offset=700448768, buflen=1048576 00:19:34.742 fio: io_u error on file /dev/sdb: Input/output error: read offset=701497344, buflen=1048576 00:19:34.742 fio: io_u error on file /dev/sdb: Input/output error: read offset=702545920, buflen=1048576 00:19:34.742 fio: pid=76167, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:34.742 00:19:34.742 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=76166: Thu Jul 25 09:01:41 2024 00:19:34.742 read: IOPS=129, BW=114MiB/s (120MB/s)(436MiB/3821msec) 00:19:34.742 slat (usec): min=25, max=285774, avg=3487.75, stdev=14143.71 00:19:34.742 clat (msec): min=264, max=715, avg=374.62, stdev=64.74 00:19:34.742 lat (msec): min=264, max=726, avg=377.43, stdev=65.04 00:19:34.742 clat percentiles (msec): 00:19:34.742 | 1.00th=[ 284], 5.00th=[ 309], 10.00th=[ 321], 20.00th=[ 334], 00:19:34.742 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 372], 60.00th=[ 376], 00:19:34.742 | 70.00th=[ 384], 80.00th=[ 397], 90.00th=[ 414], 95.00th=[ 435], 00:19:34.742 | 99.00th=[ 693], 99.50th=[ 693], 99.90th=[ 718], 99.95th=[ 718], 00:19:34.742 | 99.99th=[ 718] 00:19:34.742 bw ( KiB/s): min=26624, max=184320, per=89.62%, avg=127579.29, stdev=59035.19, samples=7 00:19:34.742 iops : min= 26, max= 180, avg=124.57, stdev=57.67, samples=7 00:19:34.742 write: IOPS=136, 
BW=119MiB/s (124MB/s)(453MiB/3821msec); 0 zone resets 00:19:34.742 slat (usec): min=57, max=74152, avg=3066.52, stdev=6388.67 00:19:34.742 clat (msec): min=327, max=730, avg=419.23, stdev=53.81 00:19:34.742 lat (msec): min=327, max=741, avg=422.29, stdev=54.07 00:19:34.742 clat percentiles (msec): 00:19:34.742 | 1.00th=[ 342], 5.00th=[ 355], 10.00th=[ 368], 20.00th=[ 380], 00:19:34.742 | 30.00th=[ 397], 40.00th=[ 409], 50.00th=[ 418], 60.00th=[ 426], 00:19:34.742 | 70.00th=[ 435], 80.00th=[ 443], 90.00th=[ 464], 95.00th=[ 477], 00:19:34.742 | 99.00th=[ 701], 99.50th=[ 735], 99.90th=[ 735], 99.95th=[ 735], 00:19:34.742 | 99.99th=[ 735] 00:19:34.742 bw ( KiB/s): min=18432, max=184320, per=86.75%, avg=132551.14, stdev=66821.23, samples=7 00:19:34.742 iops : min= 18, max= 180, avg=129.43, stdev=65.28, samples=7 00:19:34.742 lat (msec) : 500=85.05%, 750=2.36% 00:19:34.742 cpu : usr=1.36%, sys=2.33%, ctx=377, majf=0, minf=2 00:19:34.742 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:19:34.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.742 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.742 issued rwts: total=494,523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.742 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=76167: Thu Jul 25 09:01:41 2024 00:19:34.742 read: IOPS=89, BW=80.6MiB/s (84.5MB/s)(601MiB/7459msec) 00:19:34.742 slat (usec): min=26, max=3464.2k, avg=8329.74, stdev=134300.70 00:19:34.742 clat (msec): min=135, max=3615, avg=459.65, stdev=528.18 00:19:34.742 lat (msec): min=135, max=3615, avg=463.08, stdev=528.05 00:19:34.742 clat percentiles (msec): 00:19:34.742 | 1.00th=[ 144], 5.00th=[ 153], 10.00th=[ 169], 20.00th=[ 321], 00:19:34.742 | 30.00th=[ 351], 40.00th=[ 376], 50.00th=[ 388], 60.00th=[ 405], 00:19:34.742 | 70.00th=[ 460], 
80.00th=[ 485], 90.00th=[ 514], 95.00th=[ 542], 00:19:34.742 | 99.00th=[ 3574], 99.50th=[ 3608], 99.90th=[ 3608], 99.95th=[ 3608], 00:19:34.742 | 99.99th=[ 3608] 00:19:34.742 bw ( KiB/s): min=96256, max=229376, per=100.00%, avg=149731.38, stdev=44860.86, samples=8 00:19:34.742 iops : min= 94, max= 224, avg=146.12, stdev=43.90, samples=8 00:19:34.742 write: IOPS=96, BW=88.5MiB/s (92.8MB/s)(660MiB/7459msec); 0 zone resets 00:19:34.742 slat (usec): min=60, max=43921, avg=2590.48, stdev=5183.39 00:19:34.742 clat (msec): min=162, max=3686, avg=711.32, stdev=892.02 00:19:34.742 lat (msec): min=163, max=3686, avg=714.09, stdev=891.66 00:19:34.742 clat percentiles (msec): 00:19:34.742 | 1.00th=[ 176], 5.00th=[ 215], 10.00th=[ 363], 20.00th=[ 397], 00:19:34.742 | 30.00th=[ 409], 40.00th=[ 422], 50.00th=[ 439], 60.00th=[ 472], 00:19:34.742 | 70.00th=[ 523], 80.00th=[ 550], 90.00th=[ 584], 95.00th=[ 3608], 00:19:34.742 | 99.00th=[ 3675], 99.50th=[ 3675], 99.90th=[ 3675], 99.95th=[ 3675], 00:19:34.742 | 99.99th=[ 3675] 00:19:34.742 bw ( KiB/s): min=96063, max=217088, per=100.00%, avg=154599.88, stdev=38403.45, samples=8 00:19:34.742 iops : min= 93, max= 212, avg=150.88, stdev=37.68, samples=8 00:19:34.742 lat (msec) : 250=9.58%, 500=57.96%, 750=18.07%, >=2000=5.18% 00:19:34.742 cpu : usr=0.93%, sys=1.60%, ctx=452, majf=0, minf=1 00:19:34.742 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.5% 00:19:34.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.742 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.742 issued rwts: total=671,718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.742 00:19:34.742 Run status group 0 (all jobs): 00:19:34.742 READ: bw=139MiB/s (146MB/s), 80.6MiB/s-114MiB/s (84.5MB/s-120MB/s), io=1037MiB (1087MB), run=3821-7459msec 00:19:34.742 WRITE: bw=149MiB/s (156MB/s), 88.5MiB/s-119MiB/s 
(92.8MB/s-124MB/s), io=1113MiB (1167MB), run=3821-7459msec 00:19:34.742 00:19:34.742 Disk stats (read/write): 00:19:34.742 sda: ios=538/523, merge=0/0, ticks=81389/110314, in_queue=191702, util=89.15% 00:19:34.742 sdb: ios=650/663, merge=0/0, ticks=86375/129310, in_queue=215686, util=93.36% 00:19:35.001 iscsi hotplug test: fio failed as expected 00:19:35.001 Cleaning up iSCSI connection 00:19:35.001 09:01:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # fio_status=2 00:19:35.001 09:01:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@134 -- # '[' 2 -eq 0 ']' 00:19:35.001 09:01:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@138 -- # echo 'iscsi hotplug test: fio failed as expected' 00:19:35.001 09:01:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@141 -- # iscsicleanup 00:19:35.001 09:01:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:19:35.001 09:01:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:19:35.001 Logging out of session [sid: 19, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:19:35.001 Logout of [sid: 19, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:19:35.001 09:01:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:19:35.001 09:01:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@985 -- # rm -rf 00:19:35.001 09:01:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2016-06.io.spdk:Target3 00:19:35.259 09:01:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@144 -- # delete_tmp_files 00:19:35.259 09:01:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@14 -- # rm -f /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/iscsi2.json 00:19:35.259 09:01:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@15 -- # rm -f ./local-job0-0-verify.state 00:19:35.259 09:01:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@16 -- # rm -f ./local-job1-1-verify.state 00:19:35.259 09:01:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:35.259 09:01:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@148 -- # killprocess 72301 00:19:35.259 09:01:42 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@950 -- # '[' -z 72301 ']' 00:19:35.259 09:01:42 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # kill -0 72301 00:19:35.259 09:01:42 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@955 -- # uname 00:19:35.259 09:01:42 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:35.260 09:01:42 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72301 00:19:35.260 killing process with pid 72301 00:19:35.260 09:01:42 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:35.260 09:01:42 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:35.260 09:01:42 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72301' 00:19:35.260 09:01:42 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@969 -- # kill 72301 00:19:35.260 09:01:42 
iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@974 -- # wait 72301 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@150 -- # iscsitestfini 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:19:38.552 00:19:38.552 real 5m27.902s 00:19:38.552 user 3m37.091s 00:19:38.552 sys 1m52.434s 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:19:38.552 ************************************ 00:19:38.552 END TEST iscsi_tgt_fio 00:19:38.552 ************************************ 00:19:38.552 09:01:45 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@38 -- # run_test iscsi_tgt_qos /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh 00:19:38.552 09:01:45 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:38.552 09:01:45 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:38.552 09:01:45 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:19:38.552 ************************************ 00:19:38.552 START TEST iscsi_tgt_qos 00:19:38.552 ************************************ 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh 00:19:38.552 * Looking for test storage... 
00:19:38.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@11 -- # iscsitestinit 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@44 -- # '[' -z 10.0.0.1 ']' 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@49 -- # '[' -z 10.0.0.2 ']' 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@54 -- # MALLOC_BDEV_SIZE=64 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@55 -- # MALLOC_BLOCK_SIZE=512 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@56 -- # IOPS_RESULT= 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@57 -- # BANDWIDTH_RESULT= 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@58 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@60 -- # timing_enter start_iscsi_tgt 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@62 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@63 -- # pid=76382 00:19:38.552 Process pid: 76382 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@64 -- # echo 'Process pid: 76382' 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@65 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@66 -- # waitforlisten 76382 00:19:38.552 09:01:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@831 -- # '[' -z 76382 ']' 00:19:38.552 09:01:45 
iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.553 09:01:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.553 09:01:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.553 09:01:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.553 09:01:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:38.553 [2024-07-25 09:01:45.469062] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:38.553 [2024-07-25 09:01:45.469210] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76382 ] 00:19:38.553 [2024-07-25 09:01:45.638518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.814 [2024-07-25 09:01:45.914048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.252 09:01:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.252 09:01:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@864 -- # return 0 00:19:40.252 iscsi_tgt is listening. Running tests... 00:19:40.252 09:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@67 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:19:40.252 09:01:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@69 -- # timing_exit start_iscsi_tgt 00:19:40.252 09:01:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:40.252 09:01:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@71 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@72 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@73 -- # rpc_cmd bdev_malloc_create 64 512 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:40.252 Malloc0 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@78 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias Malloc0:0 1:2 64 -d 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.252 09:01:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@79 
-- # sleep 1 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@81 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:19:41.187 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@82 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:19:41.187 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:19:41.187 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:19:41.187 [2024-07-25 09:01:48.215258] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@84 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@87 -- # run_fio Malloc0 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:19:41.187 "tick_rate": 2290000000, 
00:19:41.187 "ticks": 2419728773248, 00:19:41.187 "bdevs": [ 00:19:41.187 { 00:19:41.187 "name": "Malloc0", 00:19:41.187 "bytes_read": 41472, 00:19:41.187 "num_read_ops": 4, 00:19:41.187 "bytes_written": 0, 00:19:41.187 "num_write_ops": 0, 00:19:41.187 "bytes_unmapped": 0, 00:19:41.187 "num_unmap_ops": 0, 00:19:41.187 "bytes_copied": 0, 00:19:41.187 "num_copy_ops": 0, 00:19:41.187 "read_latency_ticks": 1561642, 00:19:41.187 "max_read_latency_ticks": 577256, 00:19:41.187 "min_read_latency_ticks": 35154, 00:19:41.187 "write_latency_ticks": 0, 00:19:41.187 "max_write_latency_ticks": 0, 00:19:41.187 "min_write_latency_ticks": 0, 00:19:41.187 "unmap_latency_ticks": 0, 00:19:41.187 "max_unmap_latency_ticks": 0, 00:19:41.187 "min_unmap_latency_ticks": 0, 00:19:41.187 "copy_latency_ticks": 0, 00:19:41.187 "max_copy_latency_ticks": 0, 00:19:41.187 "min_copy_latency_ticks": 0, 00:19:41.187 "io_error": {} 00:19:41.187 } 00:19:41.187 ] 00:19:41.187 }' 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=4 00:19:41.187 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:19:41.446 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=41472 00:19:41.446 09:01:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:19:41.446 [global] 00:19:41.446 thread=1 00:19:41.446 invalidate=1 00:19:41.446 rw=randread 00:19:41.446 time_based=1 00:19:41.446 runtime=5 00:19:41.446 ioengine=libaio 00:19:41.446 direct=1 00:19:41.446 bs=1024 00:19:41.446 iodepth=128 00:19:41.446 norandommap=1 00:19:41.446 numjobs=1 00:19:41.446 00:19:41.446 [job0] 00:19:41.446 filename=/dev/sda 00:19:41.446 queue_depth set to 113 (sda) 00:19:41.446 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, 
iodepth=128 00:19:41.446 fio-3.35 00:19:41.446 Starting 1 thread 00:19:46.817 00:19:46.817 job0: (groupid=0, jobs=1): err= 0: pid=76479: Thu Jul 25 09:01:53 2024 00:19:46.817 read: IOPS=38.3k, BW=37.4MiB/s (39.2MB/s)(187MiB/5003msec) 00:19:46.817 slat (nsec): min=1079, max=1348.3k, avg=24168.08, stdev=68256.92 00:19:46.817 clat (usec): min=1276, max=5302, avg=3319.51, stdev=202.74 00:19:46.817 lat (usec): min=1282, max=5315, avg=3343.68, stdev=193.00 00:19:46.817 clat percentiles (usec): 00:19:46.817 | 1.00th=[ 2802], 5.00th=[ 3032], 10.00th=[ 3097], 20.00th=[ 3195], 00:19:46.817 | 30.00th=[ 3261], 40.00th=[ 3294], 50.00th=[ 3326], 60.00th=[ 3326], 00:19:46.817 | 70.00th=[ 3359], 80.00th=[ 3425], 90.00th=[ 3556], 95.00th=[ 3654], 00:19:46.817 | 99.00th=[ 3851], 99.50th=[ 4015], 99.90th=[ 4555], 99.95th=[ 4752], 00:19:46.817 | 99.99th=[ 5145] 00:19:46.817 bw ( KiB/s): min=37740, max=39238, per=100.00%, avg=38279.78, stdev=535.15, samples=9 00:19:46.817 iops : min=37740, max=39238, avg=38279.78, stdev=535.15, samples=9 00:19:46.817 lat (msec) : 2=0.04%, 4=99.42%, 10=0.54% 00:19:46.817 cpu : usr=6.88%, sys=21.09%, ctx=114851, majf=0, minf=32 00:19:46.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:46.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:46.817 issued rwts: total=191421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.817 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:46.817 00:19:46.817 Run status group 0 (all jobs): 00:19:46.817 READ: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=187MiB (196MB), run=5003-5003msec 00:19:46.817 00:19:46.817 Disk stats (read/write): 00:19:46.817 sda: ios=187259/0, merge=0/0, ticks=527645/0, in_queue=527645, util=98.12% 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:46.817 
09:01:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:19:46.817 "tick_rate": 2290000000, 00:19:46.817 "ticks": 2432255989030, 00:19:46.817 "bdevs": [ 00:19:46.817 { 00:19:46.817 "name": "Malloc0", 00:19:46.817 "bytes_read": 197125632, 00:19:46.817 "num_read_ops": 191478, 00:19:46.817 "bytes_written": 0, 00:19:46.817 "num_write_ops": 0, 00:19:46.817 "bytes_unmapped": 0, 00:19:46.817 "num_unmap_ops": 0, 00:19:46.817 "bytes_copied": 0, 00:19:46.817 "num_copy_ops": 0, 00:19:46.817 "read_latency_ticks": 59310852580, 00:19:46.817 "max_read_latency_ticks": 618658, 00:19:46.817 "min_read_latency_ticks": 14678, 00:19:46.817 "write_latency_ticks": 0, 00:19:46.817 "max_write_latency_ticks": 0, 00:19:46.817 "min_write_latency_ticks": 0, 00:19:46.817 "unmap_latency_ticks": 0, 00:19:46.817 "max_unmap_latency_ticks": 0, 00:19:46.817 "min_unmap_latency_ticks": 0, 00:19:46.817 "copy_latency_ticks": 0, 00:19:46.817 "max_copy_latency_ticks": 0, 00:19:46.817 "min_copy_latency_ticks": 0, 00:19:46.817 "io_error": {} 00:19:46.817 } 00:19:46.817 ] 00:19:46.817 }' 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=191478 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=197125632 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=38294 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=39416832 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@90 -- # IOPS_LIMIT=19147 
00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@91 -- # BANDWIDTH_LIMIT=19708416 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@94 -- # READ_BANDWIDTH_LIMIT=9854208 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@98 -- # IOPS_LIMIT=19000 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@99 -- # BANDWIDTH_LIMIT_MB=18 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@100 -- # BANDWIDTH_LIMIT=18874368 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@101 -- # READ_BANDWIDTH_LIMIT_MB=9 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@102 -- # READ_BANDWIDTH_LIMIT=9437184 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@105 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 19000 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@106 -- # run_fio Malloc0 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:46.817 09:01:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.817 09:01:53 
iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:46.818 09:01:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.818 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:19:46.818 "tick_rate": 2290000000, 00:19:46.818 "ticks": 2432530878032, 00:19:46.818 "bdevs": [ 00:19:46.818 { 00:19:46.818 "name": "Malloc0", 00:19:46.818 "bytes_read": 197125632, 00:19:46.818 "num_read_ops": 191478, 00:19:46.818 "bytes_written": 0, 00:19:46.818 "num_write_ops": 0, 00:19:46.818 "bytes_unmapped": 0, 00:19:46.818 "num_unmap_ops": 0, 00:19:46.818 "bytes_copied": 0, 00:19:46.818 "num_copy_ops": 0, 00:19:46.818 "read_latency_ticks": 59310852580, 00:19:46.818 "max_read_latency_ticks": 618658, 00:19:46.818 "min_read_latency_ticks": 14678, 00:19:46.818 "write_latency_ticks": 0, 00:19:46.818 "max_write_latency_ticks": 0, 00:19:46.818 "min_write_latency_ticks": 0, 00:19:46.818 "unmap_latency_ticks": 0, 00:19:46.818 "max_unmap_latency_ticks": 0, 00:19:46.818 "min_unmap_latency_ticks": 0, 00:19:46.818 "copy_latency_ticks": 0, 00:19:46.818 "max_copy_latency_ticks": 0, 00:19:46.818 "min_copy_latency_ticks": 0, 00:19:46.818 "io_error": {} 00:19:46.818 } 00:19:46.818 ] 00:19:46.818 }' 00:19:46.818 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:19:46.818 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=191478 00:19:46.818 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:19:46.818 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=197125632 00:19:46.818 09:01:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:19:47.077 [global] 00:19:47.077 thread=1 00:19:47.077 invalidate=1 00:19:47.077 rw=randread 00:19:47.077 time_based=1 00:19:47.077 runtime=5 00:19:47.077 ioengine=libaio 00:19:47.077 direct=1 00:19:47.077 
bs=1024 00:19:47.077 iodepth=128 00:19:47.077 norandommap=1 00:19:47.077 numjobs=1 00:19:47.077 00:19:47.077 [job0] 00:19:47.077 filename=/dev/sda 00:19:47.077 queue_depth set to 113 (sda) 00:19:47.077 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:19:47.077 fio-3.35 00:19:47.077 Starting 1 thread 00:19:52.358 00:19:52.358 job0: (groupid=0, jobs=1): err= 0: pid=76571: Thu Jul 25 09:01:59 2024 00:19:52.358 read: IOPS=19.0k, BW=18.6MiB/s (19.5MB/s)(93.1MiB/5006msec) 00:19:52.358 slat (nsec): min=963, max=1491.7k, avg=49928.65, stdev=175321.06 00:19:52.358 clat (usec): min=1191, max=11846, avg=6671.80, stdev=429.41 00:19:52.358 lat (usec): min=1200, max=11863, avg=6721.73, stdev=413.97 00:19:52.358 clat percentiles (usec): 00:19:52.358 | 1.00th=[ 5735], 5.00th=[ 5997], 10.00th=[ 6063], 20.00th=[ 6194], 00:19:52.358 | 30.00th=[ 6325], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 6915], 00:19:52.358 | 70.00th=[ 6980], 80.00th=[ 6980], 90.00th=[ 7046], 95.00th=[ 7046], 00:19:52.358 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 7832], 99.95th=[ 9634], 00:19:52.358 | 99.99th=[11731] 00:19:52.358 bw ( KiB/s): min=19010, max=19076, per=100.00%, avg=19059.78, stdev=22.84, samples=9 00:19:52.358 iops : min=19010, max=19076, avg=19059.78, stdev=22.84, samples=9 00:19:52.358 lat (msec) : 2=0.02%, 4=0.05%, 10=99.90%, 20=0.03% 00:19:52.358 cpu : usr=4.36%, sys=13.87%, ctx=52795, majf=0, minf=32 00:19:52.358 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:52.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:52.358 issued rwts: total=95285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:52.358 00:19:52.358 Run status group 0 (all jobs): 00:19:52.358 READ: bw=18.6MiB/s (19.5MB/s), 18.6MiB/s-18.6MiB/s 
(19.5MB/s-19.5MB/s), io=93.1MiB (97.6MB), run=5006-5006msec 00:19:52.358 00:19:52.358 Disk stats (read/write): 00:19:52.358 sda: ios=93157/0, merge=0/0, ticks=534712/0, in_queue=534712, util=98.14% 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:19:52.358 "tick_rate": 2290000000, 00:19:52.358 "ticks": 2445001644212, 00:19:52.358 "bdevs": [ 00:19:52.358 { 00:19:52.358 "name": "Malloc0", 00:19:52.358 "bytes_read": 294697472, 00:19:52.358 "num_read_ops": 286763, 00:19:52.358 "bytes_written": 0, 00:19:52.358 "num_write_ops": 0, 00:19:52.358 "bytes_unmapped": 0, 00:19:52.358 "num_unmap_ops": 0, 00:19:52.358 "bytes_copied": 0, 00:19:52.358 "num_copy_ops": 0, 00:19:52.358 "read_latency_ticks": 679722183652, 00:19:52.358 "max_read_latency_ticks": 8559508, 00:19:52.358 "min_read_latency_ticks": 14678, 00:19:52.358 "write_latency_ticks": 0, 00:19:52.358 "max_write_latency_ticks": 0, 00:19:52.358 "min_write_latency_ticks": 0, 00:19:52.358 "unmap_latency_ticks": 0, 00:19:52.358 "max_unmap_latency_ticks": 0, 00:19:52.358 "min_unmap_latency_ticks": 0, 00:19:52.358 "copy_latency_ticks": 0, 00:19:52.358 "max_copy_latency_ticks": 0, 00:19:52.358 "min_copy_latency_ticks": 0, 00:19:52.358 "io_error": {} 00:19:52.358 } 00:19:52.358 ] 00:19:52.358 }' 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=286763 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:19:52.358 09:01:59 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=294697472 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=19057 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=19514368 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@107 -- # verify_qos_limits 19057 19000 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=19057 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=19000 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@110 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 00:19:52.358 09:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@111 -- # run_fio Malloc0 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:19:52.359 "tick_rate": 2290000000, 00:19:52.359 "ticks": 2445350232758, 00:19:52.359 "bdevs": [ 00:19:52.359 { 00:19:52.359 "name": "Malloc0", 00:19:52.359 "bytes_read": 294697472, 00:19:52.359 "num_read_ops": 286763, 00:19:52.359 "bytes_written": 0, 00:19:52.359 "num_write_ops": 0, 00:19:52.359 "bytes_unmapped": 0, 00:19:52.359 "num_unmap_ops": 0, 00:19:52.359 "bytes_copied": 0, 00:19:52.359 "num_copy_ops": 0, 00:19:52.359 "read_latency_ticks": 679722183652, 00:19:52.359 "max_read_latency_ticks": 8559508, 00:19:52.359 "min_read_latency_ticks": 14678, 00:19:52.359 "write_latency_ticks": 0, 00:19:52.359 "max_write_latency_ticks": 0, 00:19:52.359 "min_write_latency_ticks": 0, 00:19:52.359 "unmap_latency_ticks": 0, 00:19:52.359 "max_unmap_latency_ticks": 0, 00:19:52.359 "min_unmap_latency_ticks": 0, 00:19:52.359 "copy_latency_ticks": 0, 00:19:52.359 "max_copy_latency_ticks": 0, 00:19:52.359 "min_copy_latency_ticks": 0, 00:19:52.359 "io_error": {} 00:19:52.359 } 00:19:52.359 ] 00:19:52.359 }' 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=286763 00:19:52.359 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:19:52.617 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=294697472 00:19:52.617 09:01:59 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:19:52.617 [global] 00:19:52.617 
thread=1 00:19:52.617 invalidate=1 00:19:52.617 rw=randread 00:19:52.617 time_based=1 00:19:52.617 runtime=5 00:19:52.617 ioengine=libaio 00:19:52.617 direct=1 00:19:52.617 bs=1024 00:19:52.617 iodepth=128 00:19:52.617 norandommap=1 00:19:52.617 numjobs=1 00:19:52.617 00:19:52.617 [job0] 00:19:52.617 filename=/dev/sda 00:19:52.617 queue_depth set to 113 (sda) 00:19:52.617 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:19:52.617 fio-3.35 00:19:52.617 Starting 1 thread 00:19:57.889 00:19:57.889 job0: (groupid=0, jobs=1): err= 0: pid=76656: Thu Jul 25 09:02:04 2024 00:19:57.889 read: IOPS=39.0k, BW=38.1MiB/s (39.9MB/s)(190MiB/5003msec) 00:19:57.889 slat (nsec): min=944, max=908930, avg=23648.96, stdev=65508.23 00:19:57.889 clat (usec): min=1045, max=6138, avg=3259.63, stdev=204.38 00:19:57.889 lat (usec): min=1048, max=6148, avg=3283.28, stdev=195.60 00:19:57.889 clat percentiles (usec): 00:19:57.889 | 1.00th=[ 2704], 5.00th=[ 2933], 10.00th=[ 3032], 20.00th=[ 3163], 00:19:57.889 | 30.00th=[ 3228], 40.00th=[ 3261], 50.00th=[ 3294], 60.00th=[ 3326], 00:19:57.889 | 70.00th=[ 3326], 80.00th=[ 3359], 90.00th=[ 3392], 95.00th=[ 3458], 00:19:57.889 | 99.00th=[ 3884], 99.50th=[ 4146], 99.90th=[ 4621], 99.95th=[ 5276], 00:19:57.889 | 99.99th=[ 5800] 00:19:57.889 bw ( KiB/s): min=38398, max=40672, per=99.82%, avg=38900.00, stdev=690.73, samples=9 00:19:57.889 iops : min=38398, max=40672, avg=38900.00, stdev=690.73, samples=9 00:19:57.889 lat (msec) : 2=0.03%, 4=99.27%, 10=0.71% 00:19:57.889 cpu : usr=7.28%, sys=21.63%, ctx=119842, majf=0, minf=32 00:19:57.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:57.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:57.889 issued rwts: total=194959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.889 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:19:57.889 00:19:57.889 Run status group 0 (all jobs): 00:19:57.889 READ: bw=38.1MiB/s (39.9MB/s), 38.1MiB/s-38.1MiB/s (39.9MB/s-39.9MB/s), io=190MiB (200MB), run=5003-5003msec 00:19:57.889 00:19:57.889 Disk stats (read/write): 00:19:57.889 sda: ios=190407/0, merge=0/0, ticks=523811/0, in_queue=523811, util=98.14% 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:19:57.889 "tick_rate": 2290000000, 00:19:57.889 "ticks": 2457853984904, 00:19:57.889 "bdevs": [ 00:19:57.889 { 00:19:57.889 "name": "Malloc0", 00:19:57.889 "bytes_read": 494335488, 00:19:57.889 "num_read_ops": 481722, 00:19:57.889 "bytes_written": 0, 00:19:57.889 "num_write_ops": 0, 00:19:57.889 "bytes_unmapped": 0, 00:19:57.889 "num_unmap_ops": 0, 00:19:57.889 "bytes_copied": 0, 00:19:57.889 "num_copy_ops": 0, 00:19:57.889 "read_latency_ticks": 739023889696, 00:19:57.889 "max_read_latency_ticks": 8559508, 00:19:57.889 "min_read_latency_ticks": 12658, 00:19:57.889 "write_latency_ticks": 0, 00:19:57.889 "max_write_latency_ticks": 0, 00:19:57.889 "min_write_latency_ticks": 0, 00:19:57.889 "unmap_latency_ticks": 0, 00:19:57.889 "max_unmap_latency_ticks": 0, 00:19:57.889 "min_unmap_latency_ticks": 0, 00:19:57.889 "copy_latency_ticks": 0, 00:19:57.889 "max_copy_latency_ticks": 0, 00:19:57.889 "min_copy_latency_ticks": 0, 00:19:57.889 "io_error": {} 00:19:57.889 } 00:19:57.889 ] 00:19:57.889 }' 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # 
end_io_count=481722 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=494335488 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=38991 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=39927603 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@112 -- # '[' 38991 -gt 19000 ']' 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@115 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 19000 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@116 -- # run_fio Malloc0 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.889 
09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:19:57.889 "tick_rate": 2290000000, 00:19:57.889 "ticks": 2458147076978, 00:19:57.889 "bdevs": [ 00:19:57.889 { 00:19:57.889 "name": "Malloc0", 00:19:57.889 "bytes_read": 494335488, 00:19:57.889 "num_read_ops": 481722, 00:19:57.889 "bytes_written": 0, 00:19:57.889 "num_write_ops": 0, 00:19:57.889 "bytes_unmapped": 0, 00:19:57.889 "num_unmap_ops": 0, 00:19:57.889 "bytes_copied": 0, 00:19:57.889 "num_copy_ops": 0, 00:19:57.889 "read_latency_ticks": 739023889696, 00:19:57.889 "max_read_latency_ticks": 8559508, 00:19:57.889 "min_read_latency_ticks": 12658, 00:19:57.889 "write_latency_ticks": 0, 00:19:57.889 "max_write_latency_ticks": 0, 00:19:57.889 "min_write_latency_ticks": 0, 00:19:57.889 "unmap_latency_ticks": 0, 00:19:57.889 "max_unmap_latency_ticks": 0, 00:19:57.889 "min_unmap_latency_ticks": 0, 00:19:57.889 "copy_latency_ticks": 0, 00:19:57.889 "max_copy_latency_ticks": 0, 00:19:57.889 "min_copy_latency_ticks": 0, 00:19:57.889 "io_error": {} 00:19:57.889 } 00:19:57.889 ] 00:19:57.889 }' 00:19:57.889 09:02:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:19:58.147 09:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=481722 00:19:58.147 09:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:19:58.147 09:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=494335488 00:19:58.147 09:02:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:19:58.147 [global] 00:19:58.147 thread=1 00:19:58.147 invalidate=1 00:19:58.147 rw=randread 00:19:58.147 time_based=1 00:19:58.147 runtime=5 00:19:58.147 ioengine=libaio 00:19:58.147 direct=1 00:19:58.147 bs=1024 00:19:58.147 iodepth=128 00:19:58.147 norandommap=1 00:19:58.147 numjobs=1 00:19:58.147 00:19:58.147 [job0] 00:19:58.147 filename=/dev/sda 00:19:58.147 queue_depth set to 
113 (sda) 00:19:58.147 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:19:58.147 fio-3.35 00:19:58.147 Starting 1 thread 00:20:03.413 00:20:03.413 job0: (groupid=0, jobs=1): err= 0: pid=76747: Thu Jul 25 09:02:10 2024 00:20:03.413 read: IOPS=19.0k, BW=18.6MiB/s (19.5MB/s)(93.1MiB/5006msec) 00:20:03.413 slat (nsec): min=1155, max=1305.2k, avg=49872.37, stdev=146990.71 00:20:03.413 clat (usec): min=1741, max=11652, avg=6670.18, stdev=405.16 00:20:03.413 lat (usec): min=1753, max=11665, avg=6720.05, stdev=391.56 00:20:03.413 clat percentiles (usec): 00:20:03.413 | 1.00th=[ 5800], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6194], 00:20:03.413 | 30.00th=[ 6390], 40.00th=[ 6783], 50.00th=[ 6849], 60.00th=[ 6915], 00:20:03.413 | 70.00th=[ 6915], 80.00th=[ 6915], 90.00th=[ 6980], 95.00th=[ 7046], 00:20:03.413 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 8455], 99.95th=[ 9503], 00:20:03.413 | 99.99th=[11469] 00:20:03.413 bw ( KiB/s): min=19038, max=19076, per=100.00%, avg=19060.00, stdev=15.72, samples=9 00:20:03.413 iops : min=19038, max=19076, avg=19060.22, stdev=15.86, samples=9 00:20:03.413 lat (msec) : 2=0.05%, 4=0.06%, 10=99.86%, 20=0.03% 00:20:03.413 cpu : usr=4.52%, sys=15.52%, ctx=84191, majf=0, minf=32 00:20:03.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:03.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:03.413 issued rwts: total=95304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:03.413 00:20:03.413 Run status group 0 (all jobs): 00:20:03.413 READ: bw=18.6MiB/s (19.5MB/s), 18.6MiB/s-18.6MiB/s (19.5MB/s-19.5MB/s), io=93.1MiB (97.6MB), run=5006-5006msec 00:20:03.413 00:20:03.413 Disk stats (read/write): 00:20:03.413 sda: ios=93176/0, merge=0/0, ticks=522624/0, in_queue=522624, 
util=98.16% 00:20:03.413 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:20:03.413 09:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.413 09:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:03.413 09:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.413 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:20:03.413 "tick_rate": 2290000000, 00:20:03.413 "ticks": 2470679869102, 00:20:03.413 "bdevs": [ 00:20:03.413 { 00:20:03.413 "name": "Malloc0", 00:20:03.413 "bytes_read": 591926784, 00:20:03.413 "num_read_ops": 577026, 00:20:03.413 "bytes_written": 0, 00:20:03.413 "num_write_ops": 0, 00:20:03.413 "bytes_unmapped": 0, 00:20:03.413 "num_unmap_ops": 0, 00:20:03.413 "bytes_copied": 0, 00:20:03.413 "num_copy_ops": 0, 00:20:03.413 "read_latency_ticks": 1363399777624, 00:20:03.413 "max_read_latency_ticks": 8559508, 00:20:03.413 "min_read_latency_ticks": 12658, 00:20:03.413 "write_latency_ticks": 0, 00:20:03.413 "max_write_latency_ticks": 0, 00:20:03.413 "min_write_latency_ticks": 0, 00:20:03.413 "unmap_latency_ticks": 0, 00:20:03.413 "max_unmap_latency_ticks": 0, 00:20:03.413 "min_unmap_latency_ticks": 0, 00:20:03.413 "copy_latency_ticks": 0, 00:20:03.413 "max_copy_latency_ticks": 0, 00:20:03.413 "min_copy_latency_ticks": 0, 00:20:03.413 "io_error": {} 00:20:03.413 } 00:20:03.413 ] 00:20:03.413 }' 00:20:03.413 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:20:03.413 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=577026 00:20:03.413 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=591926784 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=19060 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@33 -- # BANDWIDTH_RESULT=19518259 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@117 -- # verify_qos_limits 19060 19000 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=19060 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=19000 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:20:03.670 I/O rate limiting tests successful 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@119 -- # echo 'I/O rate limiting tests successful' 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@122 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 --rw_mbytes_per_sec 18 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.670 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@123 -- # run_fio Malloc0 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd 
bdev_get_iostat -b Malloc0 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:20:03.671 "tick_rate": 2290000000, 00:20:03.671 "ticks": 2471055550146, 00:20:03.671 "bdevs": [ 00:20:03.671 { 00:20:03.671 "name": "Malloc0", 00:20:03.671 "bytes_read": 591926784, 00:20:03.671 "num_read_ops": 577026, 00:20:03.671 "bytes_written": 0, 00:20:03.671 "num_write_ops": 0, 00:20:03.671 "bytes_unmapped": 0, 00:20:03.671 "num_unmap_ops": 0, 00:20:03.671 "bytes_copied": 0, 00:20:03.671 "num_copy_ops": 0, 00:20:03.671 "read_latency_ticks": 1363399777624, 00:20:03.671 "max_read_latency_ticks": 8559508, 00:20:03.671 "min_read_latency_ticks": 12658, 00:20:03.671 "write_latency_ticks": 0, 00:20:03.671 "max_write_latency_ticks": 0, 00:20:03.671 "min_write_latency_ticks": 0, 00:20:03.671 "unmap_latency_ticks": 0, 00:20:03.671 "max_unmap_latency_ticks": 0, 00:20:03.671 "min_unmap_latency_ticks": 0, 00:20:03.671 "copy_latency_ticks": 0, 00:20:03.671 "max_copy_latency_ticks": 0, 00:20:03.671 "min_copy_latency_ticks": 0, 00:20:03.671 "io_error": {} 00:20:03.671 } 00:20:03.671 ] 00:20:03.671 }' 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=577026 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=591926784 00:20:03.671 09:02:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:20:03.671 [global] 00:20:03.671 thread=1 00:20:03.671 
invalidate=1 00:20:03.671 rw=randread 00:20:03.671 time_based=1 00:20:03.671 runtime=5 00:20:03.671 ioengine=libaio 00:20:03.671 direct=1 00:20:03.671 bs=1024 00:20:03.671 iodepth=128 00:20:03.671 norandommap=1 00:20:03.671 numjobs=1 00:20:03.671 00:20:03.671 [job0] 00:20:03.671 filename=/dev/sda 00:20:03.671 queue_depth set to 113 (sda) 00:20:03.928 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:20:03.928 fio-3.35 00:20:03.928 Starting 1 thread 00:20:09.193 00:20:09.193 job0: (groupid=0, jobs=1): err= 0: pid=76836: Thu Jul 25 09:02:16 2024 00:20:09.193 read: IOPS=18.5k, BW=18.0MiB/s (18.9MB/s)(90.3MiB/5007msec) 00:20:09.193 slat (nsec): min=1160, max=1633.3k, avg=51192.24, stdev=177482.19 00:20:09.193 clat (usec): min=1229, max=12824, avg=6878.83, stdev=412.62 00:20:09.193 lat (usec): min=1247, max=12839, avg=6930.02, stdev=381.79 00:20:09.193 clat percentiles (usec): 00:20:09.193 | 1.00th=[ 5800], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6587], 00:20:09.193 | 30.00th=[ 6783], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 6980], 00:20:09.193 | 70.00th=[ 7046], 80.00th=[ 7111], 90.00th=[ 7242], 95.00th=[ 7439], 00:20:09.193 | 99.00th=[ 7701], 99.50th=[ 7832], 99.90th=[ 8225], 99.95th=[10552], 00:20:09.193 | 99.99th=[12649] 00:20:09.193 bw ( KiB/s): min=18454, max=18524, per=100.00%, avg=18487.33, stdev=21.17, samples=9 00:20:09.193 iops : min=18454, max=18524, avg=18487.33, stdev=21.17, samples=9 00:20:09.193 lat (msec) : 2=0.03%, 4=0.02%, 10=99.89%, 20=0.06% 00:20:09.193 cpu : usr=4.79%, sys=15.38%, ctx=50961, majf=0, minf=32 00:20:09.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:09.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:09.193 issued rwts: total=92445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.193 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:20:09.193 00:20:09.193 Run status group 0 (all jobs): 00:20:09.193 READ: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=90.3MiB (94.7MB), run=5007-5007msec 00:20:09.193 00:20:09.193 Disk stats (read/write): 00:20:09.193 sda: ios=90395/0, merge=0/0, ticks=531638/0, in_queue=531638, util=98.16% 00:20:09.193 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:20:09.193 09:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.193 09:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:09.193 09:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.193 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:20:09.193 "tick_rate": 2290000000, 00:20:09.193 "ticks": 2483559340244, 00:20:09.193 "bdevs": [ 00:20:09.193 { 00:20:09.193 "name": "Malloc0", 00:20:09.193 "bytes_read": 686590464, 00:20:09.193 "num_read_ops": 669471, 00:20:09.193 "bytes_written": 0, 00:20:09.193 "num_write_ops": 0, 00:20:09.193 "bytes_unmapped": 0, 00:20:09.193 "num_unmap_ops": 0, 00:20:09.193 "bytes_copied": 0, 00:20:09.193 "num_copy_ops": 0, 00:20:09.193 "read_latency_ticks": 1965707012074, 00:20:09.193 "max_read_latency_ticks": 8948692, 00:20:09.193 "min_read_latency_ticks": 12658, 00:20:09.193 "write_latency_ticks": 0, 00:20:09.193 "max_write_latency_ticks": 0, 00:20:09.193 "min_write_latency_ticks": 0, 00:20:09.193 "unmap_latency_ticks": 0, 00:20:09.193 "max_unmap_latency_ticks": 0, 00:20:09.193 "min_unmap_latency_ticks": 0, 00:20:09.193 "copy_latency_ticks": 0, 00:20:09.193 "max_copy_latency_ticks": 0, 00:20:09.193 "min_copy_latency_ticks": 0, 00:20:09.193 "io_error": {} 00:20:09.193 } 00:20:09.193 ] 00:20:09.193 }' 00:20:09.193 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:20:09.193 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # 
end_io_count=669471 00:20:09.193 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:20:09.193 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=686590464 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=18489 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=18932736 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@124 -- # verify_qos_limits 18932736 18874368 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=18932736 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=18874368 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@127 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 0 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@128 -- # run_fio Malloc0 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local 
end_bytes_read 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:20:09.194 "tick_rate": 2290000000, 00:20:09.194 "ticks": 2483889136762, 00:20:09.194 "bdevs": [ 00:20:09.194 { 00:20:09.194 "name": "Malloc0", 00:20:09.194 "bytes_read": 686590464, 00:20:09.194 "num_read_ops": 669471, 00:20:09.194 "bytes_written": 0, 00:20:09.194 "num_write_ops": 0, 00:20:09.194 "bytes_unmapped": 0, 00:20:09.194 "num_unmap_ops": 0, 00:20:09.194 "bytes_copied": 0, 00:20:09.194 "num_copy_ops": 0, 00:20:09.194 "read_latency_ticks": 1965707012074, 00:20:09.194 "max_read_latency_ticks": 8948692, 00:20:09.194 "min_read_latency_ticks": 12658, 00:20:09.194 "write_latency_ticks": 0, 00:20:09.194 "max_write_latency_ticks": 0, 00:20:09.194 "min_write_latency_ticks": 0, 00:20:09.194 "unmap_latency_ticks": 0, 00:20:09.194 "max_unmap_latency_ticks": 0, 00:20:09.194 "min_unmap_latency_ticks": 0, 00:20:09.194 "copy_latency_ticks": 0, 00:20:09.194 "max_copy_latency_ticks": 0, 00:20:09.194 "min_copy_latency_ticks": 0, 00:20:09.194 "io_error": {} 00:20:09.194 } 00:20:09.194 ] 00:20:09.194 }' 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=669471 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:20:09.194 09:02:16 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=686590464 00:20:09.194 09:02:16 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:20:09.452 [global] 00:20:09.452 thread=1 00:20:09.452 invalidate=1 00:20:09.452 rw=randread 00:20:09.452 time_based=1 00:20:09.452 runtime=5 00:20:09.452 ioengine=libaio 00:20:09.452 direct=1 00:20:09.452 bs=1024 00:20:09.452 iodepth=128 00:20:09.452 norandommap=1 00:20:09.452 numjobs=1 00:20:09.452 00:20:09.452 [job0] 00:20:09.452 filename=/dev/sda 00:20:09.452 queue_depth set to 113 (sda) 00:20:09.452 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:20:09.452 fio-3.35 00:20:09.452 Starting 1 thread 00:20:14.722 00:20:14.722 job0: (groupid=0, jobs=1): err= 0: pid=76930: Thu Jul 25 09:02:21 2024 00:20:14.722 read: IOPS=35.9k, BW=35.1MiB/s (36.8MB/s)(175MiB/5003msec) 00:20:14.722 slat (nsec): min=1150, max=1497.6k, avg=25789.26, stdev=74227.44 00:20:14.722 clat (usec): min=1004, max=12145, avg=3536.61, stdev=443.99 00:20:14.722 lat (usec): min=1013, max=12160, avg=3562.40, stdev=441.18 00:20:14.722 clat percentiles (usec): 00:20:14.722 | 1.00th=[ 2966], 5.00th=[ 3163], 10.00th=[ 3294], 20.00th=[ 3359], 00:20:14.722 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3523], 60.00th=[ 3556], 00:20:14.722 | 70.00th=[ 3556], 80.00th=[ 3589], 90.00th=[ 3687], 95.00th=[ 3949], 00:20:14.722 | 99.00th=[ 4686], 99.50th=[ 5342], 99.90th=[10421], 99.95th=[10945], 00:20:14.722 | 99.99th=[11994] 00:20:14.722 bw ( KiB/s): min=31238, max=37560, per=99.82%, avg=35855.33, stdev=1835.10, samples=9 00:20:14.722 iops : min=31238, max=37560, avg=35855.33, stdev=1835.10, samples=9 00:20:14.722 lat (msec) : 2=0.02%, 4=95.36%, 10=4.51%, 20=0.12% 00:20:14.722 cpu : usr=6.92%, sys=20.51%, ctx=104644, majf=0, minf=32 00:20:14.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:20:14.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.722 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:14.722 issued rwts: total=179701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:14.722 00:20:14.722 Run status group 0 (all jobs): 00:20:14.722 READ: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=175MiB (184MB), run=5003-5003msec 00:20:14.722 00:20:14.722 Disk stats (read/write): 00:20:14.722 sda: ios=175635/0, merge=0/0, ticks=526626/0, in_queue=526626, util=98.12% 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:20:14.722 "tick_rate": 2290000000, 00:20:14.722 "ticks": 2496381306594, 00:20:14.722 "bdevs": [ 00:20:14.722 { 00:20:14.722 "name": "Malloc0", 00:20:14.722 "bytes_read": 870604288, 00:20:14.722 "num_read_ops": 849172, 00:20:14.722 "bytes_written": 0, 00:20:14.722 "num_write_ops": 0, 00:20:14.722 "bytes_unmapped": 0, 00:20:14.722 "num_unmap_ops": 0, 00:20:14.722 "bytes_copied": 0, 00:20:14.722 "num_copy_ops": 0, 00:20:14.722 "read_latency_ticks": 2023494981428, 00:20:14.722 "max_read_latency_ticks": 8948692, 00:20:14.722 "min_read_latency_ticks": 12658, 00:20:14.722 "write_latency_ticks": 0, 00:20:14.722 "max_write_latency_ticks": 0, 00:20:14.722 "min_write_latency_ticks": 0, 00:20:14.722 "unmap_latency_ticks": 0, 00:20:14.722 "max_unmap_latency_ticks": 0, 00:20:14.722 "min_unmap_latency_ticks": 0, 00:20:14.722 "copy_latency_ticks": 0, 00:20:14.722 "max_copy_latency_ticks": 0, 00:20:14.722 "min_copy_latency_ticks": 0, 00:20:14.722 "io_error": {} 00:20:14.722 } 00:20:14.722 ] 
00:20:14.722 }' 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=849172 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=870604288 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=35940 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=36802764 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@129 -- # '[' 36802764 -gt 18874368 ']' 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@132 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 18 --r_mbytes_per_sec 9 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@133 -- # run_fio Malloc0 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:20:14.722 "tick_rate": 2290000000, 00:20:14.722 "ticks": 2496628578586, 00:20:14.722 "bdevs": [ 00:20:14.722 { 00:20:14.722 "name": "Malloc0", 00:20:14.722 "bytes_read": 870604288, 00:20:14.722 "num_read_ops": 849172, 00:20:14.722 "bytes_written": 0, 00:20:14.722 "num_write_ops": 0, 00:20:14.722 "bytes_unmapped": 0, 00:20:14.722 "num_unmap_ops": 0, 00:20:14.722 "bytes_copied": 0, 00:20:14.722 "num_copy_ops": 0, 00:20:14.722 "read_latency_ticks": 2023494981428, 00:20:14.722 "max_read_latency_ticks": 8948692, 00:20:14.722 "min_read_latency_ticks": 12658, 00:20:14.722 "write_latency_ticks": 0, 00:20:14.722 "max_write_latency_ticks": 0, 00:20:14.722 "min_write_latency_ticks": 0, 00:20:14.722 "unmap_latency_ticks": 0, 00:20:14.722 "max_unmap_latency_ticks": 0, 00:20:14.722 "min_unmap_latency_ticks": 0, 00:20:14.722 "copy_latency_ticks": 0, 00:20:14.722 "max_copy_latency_ticks": 0, 00:20:14.722 "min_copy_latency_ticks": 0, 00:20:14.722 "io_error": {} 00:20:14.722 } 00:20:14.722 ] 00:20:14.722 }' 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=849172 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=870604288 00:20:14.722 09:02:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:20:14.980 [global] 00:20:14.980 thread=1 00:20:14.980 invalidate=1 00:20:14.980 rw=randread 00:20:14.980 time_based=1 00:20:14.980 runtime=5 00:20:14.980 
ioengine=libaio 00:20:14.980 direct=1 00:20:14.980 bs=1024 00:20:14.980 iodepth=128 00:20:14.980 norandommap=1 00:20:14.980 numjobs=1 00:20:14.980 00:20:14.980 [job0] 00:20:14.980 filename=/dev/sda 00:20:14.980 queue_depth set to 113 (sda) 00:20:14.980 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:20:14.980 fio-3.35 00:20:14.980 Starting 1 thread 00:20:20.310 00:20:20.310 job0: (groupid=0, jobs=1): err= 0: pid=77012: Thu Jul 25 09:02:27 2024 00:20:20.310 read: IOPS=9233, BW=9233KiB/s (9455kB/s)(45.2MiB/5013msec) 00:20:20.310 slat (nsec): min=1157, max=2144.2k, avg=105132.72, stdev=268310.54 00:20:20.310 clat (usec): min=2138, max=25322, avg=13752.97, stdev=653.36 00:20:20.310 lat (usec): min=2148, max=25336, avg=13858.10, stdev=614.49 00:20:20.310 clat percentiles (usec): 00:20:20.310 | 1.00th=[12518], 5.00th=[12911], 10.00th=[13042], 20.00th=[13304], 00:20:20.310 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13960], 60.00th=[13960], 00:20:20.310 | 70.00th=[14091], 80.00th=[14091], 90.00th=[14222], 95.00th=[14222], 00:20:20.310 | 99.00th=[14615], 99.50th=[14746], 99.90th=[20841], 99.95th=[23200], 00:20:20.310 | 99.99th=[25297] 00:20:20.310 bw ( KiB/s): min= 9126, max= 9254, per=99.95%, avg=9229.90, stdev=39.44, samples=10 00:20:20.310 iops : min= 9126, max= 9254, avg=9229.90, stdev=39.44, samples=10 00:20:20.310 lat (msec) : 4=0.05%, 10=0.14%, 20=99.70%, 50=0.11% 00:20:20.310 cpu : usr=2.39%, sys=8.80%, ctx=44900, majf=0, minf=32 00:20:20.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:20.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.310 issued rwts: total=46286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.310 00:20:20.310 Run status group 0 (all jobs): 00:20:20.310 READ: 
bw=9233KiB/s (9455kB/s), 9233KiB/s-9233KiB/s (9455kB/s-9455kB/s), io=45.2MiB (47.4MB), run=5013-5013msec 00:20:20.310 00:20:20.310 Disk stats (read/write): 00:20:20.310 sda: ios=45223/0, merge=0/0, ticks=541088/0, in_queue=541088, util=98.16% 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:20:20.310 "tick_rate": 2290000000, 00:20:20.310 "ticks": 2509057044216, 00:20:20.310 "bdevs": [ 00:20:20.310 { 00:20:20.310 "name": "Malloc0", 00:20:20.310 "bytes_read": 918001152, 00:20:20.310 "num_read_ops": 895458, 00:20:20.310 "bytes_written": 0, 00:20:20.310 "num_write_ops": 0, 00:20:20.310 "bytes_unmapped": 0, 00:20:20.310 "num_unmap_ops": 0, 00:20:20.310 "bytes_copied": 0, 00:20:20.310 "num_copy_ops": 0, 00:20:20.310 "read_latency_ticks": 2700685594102, 00:20:20.310 "max_read_latency_ticks": 16923070, 00:20:20.310 "min_read_latency_ticks": 12658, 00:20:20.310 "write_latency_ticks": 0, 00:20:20.310 "max_write_latency_ticks": 0, 00:20:20.310 "min_write_latency_ticks": 0, 00:20:20.310 "unmap_latency_ticks": 0, 00:20:20.310 "max_unmap_latency_ticks": 0, 00:20:20.310 "min_unmap_latency_ticks": 0, 00:20:20.310 "copy_latency_ticks": 0, 00:20:20.310 "max_copy_latency_ticks": 0, 00:20:20.310 "min_copy_latency_ticks": 0, 00:20:20.310 "io_error": {} 00:20:20.310 } 00:20:20.310 ] 00:20:20.310 }' 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=895458 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r 
'.bdevs[0].bytes_read' 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=918001152 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=9257 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=9479372 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@134 -- # verify_qos_limits 9479372 9437184 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=9479372 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=9437184 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:20:20.310 I/O bandwidth limiting tests successful 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@136 -- # echo 'I/O bandwidth limiting tests successful' 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@138 -- # iscsicleanup 00:20:20.310 Cleaning up iSCSI connection 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:20:20.310 Logging out of session [sid: 20, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:20:20.310 Logout of [sid: 20, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@985 -- # rm -rf 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@139 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:Target1 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@141 -- # rm -f ./local-job0-0-verify.state 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@143 -- # killprocess 76382 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@950 -- # '[' -z 76382 ']' 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # kill -0 76382 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@955 -- # uname 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76382 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:20.310 killing process with pid 76382 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76382' 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@969 -- # kill 76382 00:20:20.310 09:02:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@974 -- # wait 76382 00:20:24.526 
09:02:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@145 -- # iscsitestfini 00:20:24.526 09:02:30 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:20:24.526 00:20:24.526 real 0m45.649s 00:20:24.526 user 0m41.583s 00:20:24.526 sys 0m12.016s 00:20:24.527 09:02:30 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:24.527 ************************************ 00:20:24.527 END TEST iscsi_tgt_qos 00:20:24.527 09:02:30 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:24.527 ************************************ 00:20:24.527 09:02:30 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@39 -- # run_test iscsi_tgt_ip_migration /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:20:24.527 09:02:30 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:24.527 09:02:30 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:24.527 09:02:30 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:20:24.527 ************************************ 00:20:24.527 START TEST iscsi_tgt_ip_migration 00:20:24.527 ************************************ 00:20:24.527 09:02:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:20:24.527 * Looking for test storage... 
00:20:24.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@11 -- # iscsitestinit 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@13 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@14 -- # pids=() 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@16 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:20:24.527 09:02:31 
iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:20:24.527 #define SPDK_CONFIG_H 00:20:24.527 #define SPDK_CONFIG_APPS 1 00:20:24.527 #define SPDK_CONFIG_ARCH native 00:20:24.527 #define SPDK_CONFIG_ASAN 1 00:20:24.527 #undef SPDK_CONFIG_AVAHI 00:20:24.527 #undef SPDK_CONFIG_CET 00:20:24.527 #define SPDK_CONFIG_COVERAGE 1 00:20:24.527 #define SPDK_CONFIG_CROSS_PREFIX 00:20:24.527 #undef SPDK_CONFIG_CRYPTO 00:20:24.527 #undef SPDK_CONFIG_CRYPTO_MLX5 00:20:24.527 #undef SPDK_CONFIG_CUSTOMOCF 00:20:24.527 #undef SPDK_CONFIG_DAOS 00:20:24.527 #define SPDK_CONFIG_DAOS_DIR 00:20:24.527 #define SPDK_CONFIG_DEBUG 1 00:20:24.527 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:20:24.527 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:20:24.527 #define SPDK_CONFIG_DPDK_INC_DIR 00:20:24.527 #define SPDK_CONFIG_DPDK_LIB_DIR 00:20:24.527 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:20:24.527 #undef SPDK_CONFIG_DPDK_UADK 00:20:24.527 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:20:24.527 #define SPDK_CONFIG_EXAMPLES 1 
00:20:24.527 #undef SPDK_CONFIG_FC 00:20:24.527 #define SPDK_CONFIG_FC_PATH 00:20:24.527 #define SPDK_CONFIG_FIO_PLUGIN 1 00:20:24.527 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:20:24.527 #undef SPDK_CONFIG_FUSE 00:20:24.527 #undef SPDK_CONFIG_FUZZER 00:20:24.527 #define SPDK_CONFIG_FUZZER_LIB 00:20:24.527 #undef SPDK_CONFIG_GOLANG 00:20:24.527 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:20:24.527 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:20:24.527 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:20:24.527 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:20:24.527 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:20:24.527 #undef SPDK_CONFIG_HAVE_LIBBSD 00:20:24.527 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:20:24.527 #define SPDK_CONFIG_IDXD 1 00:20:24.527 #define SPDK_CONFIG_IDXD_KERNEL 1 00:20:24.527 #undef SPDK_CONFIG_IPSEC_MB 00:20:24.527 #define SPDK_CONFIG_IPSEC_MB_DIR 00:20:24.527 #define SPDK_CONFIG_ISAL 1 00:20:24.527 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:20:24.527 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:20:24.527 #define SPDK_CONFIG_LIBDIR 00:20:24.527 #undef SPDK_CONFIG_LTO 00:20:24.527 #define SPDK_CONFIG_MAX_LCORES 128 00:20:24.527 #define SPDK_CONFIG_NVME_CUSE 1 00:20:24.527 #undef SPDK_CONFIG_OCF 00:20:24.527 #define SPDK_CONFIG_OCF_PATH 00:20:24.527 #define SPDK_CONFIG_OPENSSL_PATH 00:20:24.527 #undef SPDK_CONFIG_PGO_CAPTURE 00:20:24.527 #define SPDK_CONFIG_PGO_DIR 00:20:24.527 #undef SPDK_CONFIG_PGO_USE 00:20:24.527 #define SPDK_CONFIG_PREFIX /usr/local 00:20:24.527 #undef SPDK_CONFIG_RAID5F 00:20:24.527 #define SPDK_CONFIG_RBD 1 00:20:24.527 #define SPDK_CONFIG_RDMA 1 00:20:24.527 #define SPDK_CONFIG_RDMA_PROV verbs 00:20:24.527 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:20:24.527 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:20:24.527 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:20:24.527 #define SPDK_CONFIG_SHARED 1 00:20:24.527 #undef SPDK_CONFIG_SMA 00:20:24.527 #define SPDK_CONFIG_TESTS 1 00:20:24.527 #undef SPDK_CONFIG_TSAN 00:20:24.527 #define SPDK_CONFIG_UBLK 1 
00:20:24.527 #define SPDK_CONFIG_UBSAN 1 00:20:24.527 #undef SPDK_CONFIG_UNIT_TESTS 00:20:24.527 #undef SPDK_CONFIG_URING 00:20:24.527 #define SPDK_CONFIG_URING_PATH 00:20:24.527 #undef SPDK_CONFIG_URING_ZNS 00:20:24.527 #undef SPDK_CONFIG_USDT 00:20:24.527 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:20:24.527 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:20:24.527 #undef SPDK_CONFIG_VFIO_USER 00:20:24.527 #define SPDK_CONFIG_VFIO_USER_DIR 00:20:24.527 #define SPDK_CONFIG_VHOST 1 00:20:24.527 #define SPDK_CONFIG_VIRTIO 1 00:20:24.527 #undef SPDK_CONFIG_VTUNE 00:20:24.527 #define SPDK_CONFIG_VTUNE_DIR 00:20:24.527 #define SPDK_CONFIG_WERROR 1 00:20:24.527 #define SPDK_CONFIG_WPDK_DIR 00:20:24.527 #undef SPDK_CONFIG_XNVME 00:20:24.527 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:20:24.527 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@17 -- # NETMASK=127.0.0.0/24 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@18 -- # MIGRATION_ADDRESS=127.0.0.2 00:20:24.528 Running ip migration tests 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@56 -- # echo 'Running ip migration tests' 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@57 -- # timing_enter start_iscsi_tgt_0 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@58 -- # rpc_first_addr=/var/tmp/spdk0.sock 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@59 -- # iscsi_tgt_start /var/tmp/spdk0.sock 1 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- 
ip_migration/ip_migration.sh@39 -- # pid=77177 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -m 1 --wait-for-rpc 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 77177' 00:20:24.528 Process pid: 77177 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 77177 /var/tmp/spdk0.sock 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@831 -- # '[' -z 77177 ']' 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk0.sock 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:24.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:24.528 09:02:31 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:24.528 [2024-07-25 09:02:31.210846] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:24.528 [2024-07-25 09:02:31.211067] [ DPDK EAL parameters: iscsi --no-shconf -c 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77177 ] 00:20:24.528 [2024-07-25 09:02:31.380075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.786 [2024-07-25 09:02:31.715040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.045 09:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:25.045 09:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@864 -- # return 0 00:20:25.045 09:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_set_options -o 30 -a 64 00:20:25.046 09:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.046 09:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:25.046 09:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.046 09:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk0.sock framework_start_init 00:20:25.046 09:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.046 09:02:32 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.429 iscsi_tgt is listening. Running tests... 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk0.sock bdev_malloc_create 64 512 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:26.429 Malloc0 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@60 -- # timing_exit start_iscsi_tgt_0 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@62 -- # timing_enter start_iscsi_tgt_1 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:26.429 Process pid: 77221 00:20:26.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 
00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@63 -- # rpc_second_addr=/var/tmp/spdk1.sock 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@64 -- # iscsi_tgt_start /var/tmp/spdk1.sock 2 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=77221 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 77221' 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 77221 /var/tmp/spdk1.sock 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@831 -- # '[' -z 77221 ']' 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -m 2 --wait-for-rpc 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk1.sock 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:26.429 09:02:33 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:26.429 [2024-07-25 09:02:33.518547] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:26.429 [2024-07-25 09:02:33.518809] [ DPDK EAL parameters: iscsi --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77221 ] 00:20:26.687 [2024-07-25 09:02:33.682844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.946 [2024-07-25 09:02:33.964857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.511 09:02:34 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:27.511 09:02:34 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@864 -- # return 0 00:20:27.511 09:02:34 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_set_options -o 30 -a 64 00:20:27.511 09:02:34 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.511 09:02:34 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:27.511 09:02:34 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.511 09:02:34 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk1.sock framework_start_init 00:20:27.511 09:02:34 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.511 09:02:34 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:28.447 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.447 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 00:20:28.447 iscsi_tgt is listening. Running tests... 
00:20:28.447 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:20:28.447 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.447 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:28.447 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.447 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk1.sock bdev_malloc_create 64 512 00:20:28.447 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.447 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:28.447 Malloc0 00:20:28.447 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.447 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:28.447 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@65 -- # timing_exit start_iscsi_tgt_1 00:20:28.447 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:28.447 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:28.706 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@67 -- # rpc_add_target_node /var/tmp/spdk0.sock 00:20:28.706 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:20:28.706 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:20:28.706 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.706 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:28.706 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.706 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:20:28.706 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.706 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:28.706 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.706 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:20:28.706 09:02:35 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@69 -- # sleep 1 00:20:29.654 09:02:36 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@70 -- # iscsiadm -m discovery -t sendtargets -p 127.0.0.2:3260 00:20:29.654 127.0.0.2:3260,1 iqn.2016-06.io.spdk:target1 00:20:29.654 09:02:36 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@71 -- # sleep 1 00:20:30.590 09:02:37 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@72 -- # iscsiadm -m node --login -p 127.0.0.2:3260 00:20:30.590 Logging in to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:20:30.590 Login to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 
00:20:30.590 09:02:37 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@73 -- # waitforiscsidevices 1 00:20:30.590 09:02:37 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@116 -- # local num=1 00:20:30.590 09:02:37 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:30.590 09:02:37 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:30.590 09:02:37 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:30.590 09:02:37 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:30.590 [2024-07-25 09:02:37.685086] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:30.590 09:02:37 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # n=1 00:20:30.590 09:02:37 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:20:30.590 09:02:37 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@123 -- # return 0 00:20:30.590 09:02:37 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@77 -- # fiopid=77304 00:20:30.590 09:02:37 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 32 -t randrw -r 12 00:20:30.590 09:02:37 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@78 -- # sleep 3 00:20:30.848 [global] 00:20:30.848 thread=1 00:20:30.848 invalidate=1 00:20:30.848 rw=randrw 00:20:30.848 time_based=1 00:20:30.848 runtime=12 00:20:30.848 ioengine=libaio 00:20:30.848 direct=1 00:20:30.848 bs=4096 00:20:30.848 iodepth=32 00:20:30.848 norandommap=1 00:20:30.848 numjobs=1 00:20:30.848 00:20:30.848 [job0] 00:20:30.848 filename=/dev/sda 00:20:30.848 queue_depth set to 113 (sda) 00:20:30.848 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32 00:20:30.848 fio-3.35 
00:20:30.848 Starting 1 thread 00:20:30.848 [2024-07-25 09:02:37.858198] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:34.131 09:02:40 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@80 -- # rpc_cmd -s /var/tmp/spdk0.sock spdk_kill_instance SIGTERM 00:20:34.131 09:02:40 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.131 09:02:40 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:35.067 09:02:42 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.067 09:02:42 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@81 -- # wait 77177 00:20:36.998 09:02:43 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@83 -- # rpc_add_target_node /var/tmp/spdk1.sock 00:20:36.998 09:02:43 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:20:36.998 09:02:43 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:20:36.998 09:02:43 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.998 09:02:43 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:36.998 09:02:43 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.998 09:02:43 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:20:36.998 09:02:43 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.998 09:02:43 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:36.998 09:02:43 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:20:36.998 09:02:43 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:20:36.998 09:02:43 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@85 -- # wait 77304 00:20:43.559 [2024-07-25 09:02:49.965048] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:43.559 00:20:43.559 job0: (groupid=0, jobs=1): err= 0: pid=77338: Thu Jul 25 09:02:50 2024 00:20:43.559 read: IOPS=9248, BW=36.1MiB/s (37.9MB/s)(434MiB/12001msec) 00:20:43.559 slat (nsec): min=1466, max=212686, avg=6936.05, stdev=6106.26 00:20:43.559 clat (usec): min=260, max=5006.3k, avg=1606.86, stdev=54162.43 00:20:43.559 lat (usec): min=283, max=5006.3k, avg=1613.79, stdev=54162.51 00:20:43.559 clat percentiles (usec): 00:20:43.559 | 1.00th=[ 652], 5.00th=[ 766], 10.00th=[ 824], 00:20:43.559 | 20.00th=[ 889], 30.00th=[ 930], 40.00th=[ 971], 00:20:43.559 | 50.00th=[ 1004], 60.00th=[ 1045], 70.00th=[ 1090], 00:20:43.559 | 80.00th=[ 1139], 90.00th=[ 1237], 95.00th=[ 1336], 00:20:43.559 | 99.00th=[ 1467], 99.50th=[ 1516], 99.90th=[ 2114], 00:20:43.559 | 99.95th=[ 3326], 99.99th=[4999611] 00:20:43.559 bw ( KiB/s): min=32384, max=68248, per=100.00%, avg=59132.00, stdev=11624.96, samples=14 00:20:43.559 iops : min= 8096, max=17062, avg=14783.00, stdev=2906.24, samples=14 00:20:43.559 write: IOPS=9223, BW=36.0MiB/s (37.8MB/s)(432MiB/12001msec); 0 zone resets 00:20:43.559 slat (nsec): min=1628, max=213557, avg=7453.29, stdev=6952.67 00:20:43.559 clat (usec): min=193, max=5006.2k, avg=1841.46, stdev=65566.59 00:20:43.559 lat (usec): min=222, max=5006.2k, avg=1848.92, stdev=65566.65 00:20:43.559 clat percentiles (usec): 00:20:43.559 | 1.00th=[ 627], 5.00th=[ 734], 10.00th=[ 783], 00:20:43.559 | 20.00th=[ 840], 30.00th=[ 889], 40.00th=[ 930], 00:20:43.559 | 50.00th=[ 963], 60.00th=[ 996], 70.00th=[ 1045], 00:20:43.559 | 80.00th=[ 1123], 90.00th=[ 1221], 95.00th=[ 
1303], 00:20:43.559 | 99.00th=[ 1418], 99.50th=[ 1467], 99.90th=[ 2024], 00:20:43.559 | 99.95th=[ 3654], 99.99th=[4999611] 00:20:43.559 bw ( KiB/s): min=31280, max=68744, per=100.00%, avg=59033.71, stdev=11905.08, samples=14 00:20:43.559 iops : min= 7820, max=17186, avg=14758.43, stdev=2976.27, samples=14 00:20:43.559 lat (usec) : 250=0.01%, 500=0.06%, 750=5.30%, 1000=48.85% 00:20:43.560 lat (msec) : 2=45.68%, 4=0.07%, 10=0.03%, >=2000=0.01% 00:20:43.560 cpu : usr=4.87%, sys=11.84%, ctx=23781, majf=0, minf=1 00:20:43.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% 00:20:43.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:43.560 issued rwts: total=110997,110690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.560 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:43.560 00:20:43.560 Run status group 0 (all jobs): 00:20:43.560 READ: bw=36.1MiB/s (37.9MB/s), 36.1MiB/s-36.1MiB/s (37.9MB/s-37.9MB/s), io=434MiB (455MB), run=12001-12001msec 00:20:43.560 WRITE: bw=36.0MiB/s (37.8MB/s), 36.0MiB/s-36.0MiB/s (37.8MB/s-37.8MB/s), io=432MiB (453MB), run=12001-12001msec 00:20:43.560 00:20:43.560 Disk stats (read/write): 00:20:43.560 sda: ios=109476/109106, merge=0/0, ticks=163951/194220, in_queue=358172, util=99.29% 00:20:43.560 09:02:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@87 -- # trap - SIGINT SIGTERM EXIT 00:20:43.560 09:02:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@89 -- # iscsicleanup 00:20:43.560 Cleaning up iSCSI connection 00:20:43.560 09:02:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:20:43.560 09:02:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:20:43.560 Logging out of session [sid: 21, target: iqn.2016-06.io.spdk:target1, portal: 
127.0.0.2,3260] 00:20:43.560 Logout of [sid: 21, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 00:20:43.560 09:02:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:20:43.560 09:02:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@985 -- # rm -rf 00:20:43.560 09:02:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@91 -- # rpc_cmd -s /var/tmp/spdk1.sock spdk_kill_instance SIGTERM 00:20:43.560 09:02:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.560 09:02:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:44.492 09:02:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.492 09:02:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@92 -- # wait 77221 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@93 -- # iscsitestfini 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:20:46.390 00:20:46.390 real 0m22.284s 00:20:46.390 user 0m31.256s 00:20:46.390 sys 0m3.959s 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:46.390 ************************************ 00:20:46.390 END TEST iscsi_tgt_ip_migration 00:20:46.390 ************************************ 00:20:46.390 09:02:53 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@40 -- # run_test iscsi_tgt_trace_record /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:20:46.390 09:02:53 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:46.390 09:02:53 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:46.390 09:02:53 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 
00:20:46.390 ************************************ 00:20:46.390 START TEST iscsi_tgt_trace_record 00:20:46.390 ************************************ 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:20:46.390 * Looking for test storage... 00:20:46.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:46.390 09:02:53 
iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@11 -- # iscsitestinit 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@13 -- # TRACE_TMP_FOLDER=./tmp-trace 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@14 -- # TRACE_RECORD_OUTPUT=./tmp-trace/record.trace 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@15 -- # TRACE_RECORD_NOTICE_LOG=./tmp-trace/record.notice 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@16 -- # TRACE_TOOL_LOG=./tmp-trace/trace.log 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@22 -- # '[' -z 10.0.0.1 ']' 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@27 -- # '[' -z 10.0.0.2 ']' 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@32 -- # NUM_TRACE_ENTRIES=4096 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@33 -- # MALLOC_BDEV_SIZE=64 00:20:46.390 09:02:53 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@34 -- # MALLOC_BLOCK_SIZE=4096 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@36 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@37 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@39 -- # timing_enter start_iscsi_tgt 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:20:46.390 start iscsi_tgt with trace enabled 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@41 -- # echo 'start iscsi_tgt with trace enabled' 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@43 -- # iscsi_pid=77563 00:20:46.390 Process pid: 77563 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@44 -- # echo 'Process pid: 77563' 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@46 -- # trap 'killprocess $iscsi_pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@48 -- # waitforlisten 77563 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@831 -- # '[' -z 77563 ']' 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:46.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@42 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xf --num-trace-entries 4096 --tpoint-group all 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:46.390 09:02:53 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:20:46.390 [2024-07-25 09:02:53.503330] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:46.390 [2024-07-25 09:02:53.503478] [ DPDK EAL parameters: iscsi --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77563 ] 00:20:46.647 [2024-07-25 09:02:53.672684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.904 [2024-07-25 09:02:53.948966] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask all specified. 00:20:46.904 [2024-07-25 09:02:53.949032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s iscsi -p 77563' to capture a snapshot of events at runtime. 00:20:46.904 [2024-07-25 09:02:53.949047] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.904 [2024-07-25 09:02:53.949056] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.904 [2024-07-25 09:02:53.949067] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/iscsi_trace.pid77563 for offline analysis/debug. 
00:20:46.904 [2024-07-25 09:02:53.949242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.904 [2024-07-25 09:02:53.949374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.904 [2024-07-25 09:02:53.949463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.904 [2024-07-25 09:02:53.949498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@864 -- # return 0 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@50 -- # echo 'iscsi_tgt is listening. Running tests...' 00:20:48.280 iscsi_tgt is listening. Running tests... 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@52 -- # timing_exit start_iscsi_tgt 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@54 -- # mkdir -p ./tmp-trace 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@56 -- # record_pid=77598 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace_record -s iscsi -p 77563 -f ./tmp-trace/record.trace -q 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@57 -- # echo 'Trace record pid: 77598' 00:20:48.280 Trace record pid: 77598 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@59 -- # RPCS= 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@60 -- # RPCS+='iscsi_create_portal_group 1 10.0.0.1:3260\n' 
00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@61 -- # RPCS+='iscsi_create_initiator_group 2 ANY 10.0.0.2/32\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@63 -- # echo 'Create bdevs and target nodes' 00:20:48.280 Create bdevs and target nodes 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@64 -- # CONNECTION_NUMBER=15 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # seq 0 15 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc0\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target0 Target0_alias Malloc0:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc1\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target1 Target1_alias Malloc1:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc2\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target2 Target2_alias Malloc2:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in 
$(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc3\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target3 Target3_alias Malloc3:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc4\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target4 Target4_alias Malloc4:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc5\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target5 Target5_alias Malloc5:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc6\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target6 Target6_alias Malloc6:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc7\n' 00:20:48.280 09:02:55 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target7 Target7_alias Malloc7:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc8\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target8 Target8_alias Malloc8:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc9\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target9 Target9_alias Malloc9:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc10\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target10 Target10_alias Malloc10:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc11\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target11 Target11_alias Malloc11:0 1:2 256 -d\n' 00:20:48.280 09:02:55 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc12\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target12 Target12_alias Malloc12:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc13\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target13 Target13_alias Malloc13:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc14\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target14 Target14_alias Malloc14:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc15\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target15 Target15_alias Malloc15:0 1:2 256 -d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # echo -e iscsi_create_portal_group 1 '10.0.0.1:3260\niscsi_create_initiator_group' 2 ANY 
'10.0.0.2/32\nbdev_malloc_create' 64 4096 -b 'Malloc0\niscsi_create_target_node' Target0 Target0_alias Malloc0:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc1\niscsi_create_target_node' Target1 Target1_alias Malloc1:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc2\niscsi_create_target_node' Target2 Target2_alias Malloc2:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc3\niscsi_create_target_node' Target3 Target3_alias Malloc3:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc4\niscsi_create_target_node' Target4 Target4_alias Malloc4:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc5\niscsi_create_target_node' Target5 Target5_alias Malloc5:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc6\niscsi_create_target_node' Target6 Target6_alias Malloc6:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc7\niscsi_create_target_node' Target7 Target7_alias Malloc7:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc8\niscsi_create_target_node' Target8 Target8_alias Malloc8:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc9\niscsi_create_target_node' Target9 Target9_alias Malloc9:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc10\niscsi_create_target_node' Target10 Target10_alias Malloc10:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc11\niscsi_create_target_node' Target11 Target11_alias Malloc11:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc12\niscsi_create_target_node' Target12 Target12_alias Malloc12:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc13\niscsi_create_target_node' Target13 Target13_alias Malloc13:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc14\niscsi_create_target_node' Target14 Target14_alias Malloc14:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc15\niscsi_create_target_node' Target15 Target15_alias Malloc15:0 1:2 256 '-d\n' 00:20:48.280 09:02:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:50.182 Malloc0 
00:20:50.182 Malloc1 00:20:50.182 Malloc2 00:20:50.182 Malloc3 00:20:50.182 Malloc4 00:20:50.182 Malloc5 00:20:50.182 Malloc6 00:20:50.182 Malloc7 00:20:50.182 Malloc8 00:20:50.182 Malloc9 00:20:50.182 Malloc10 00:20:50.182 Malloc11 00:20:50.182 Malloc12 00:20:50.182 Malloc13 00:20:50.182 Malloc14 00:20:50.182 Malloc15 00:20:50.182 09:02:57 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@71 -- # sleep 1 00:20:51.117 09:02:58 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@73 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target0 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:20:51.117 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:20:51.117 09:02:58 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@74 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:20:51.117 [2024-07-25 09:02:58.172628] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.117 [2024-07-25 09:02:58.182334] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.117 [2024-07-25 09:02:58.197146] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 
00:20:51.117 [2024-07-25 09:02:58.212179] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.376 [2024-07-25 09:02:58.257618] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.376 [2024-07-25 09:02:58.264397] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.376 [2024-07-25 09:02:58.300499] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.376 [2024-07-25 09:02:58.333991] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.376 [2024-07-25 09:02:58.348109] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.376 [2024-07-25 09:02:58.379195] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.376 [2024-07-25 09:02:58.394613] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.376 [2024-07-25 09:02:58.427579] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.376 [2024-07-25 09:02:58.456782] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.376 [2024-07-25 09:02:58.475975] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.634 [2024-07-25 09:02:58.513405] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:20:51.634 
Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:20:51.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 
00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:20:51.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 
00:20:51.634 09:02:58 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@75 -- # waitforiscsidevices 16 00:20:51.634 09:02:58 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@116 -- # local num=16 00:20:51.634 09:02:58 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:51.634 09:02:58 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:51.634 [2024-07-25 09:02:58.525029] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:51.634 09:02:58 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:51.634 09:02:58 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:51.634 09:02:58 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # n=16 00:20:51.634 09:02:58 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@120 -- # '[' 16 -ne 16 ']' 00:20:51.634 09:02:58 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@123 -- # return 0 00:20:51.634 09:02:58 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@77 -- # trap 'iscsicleanup; killprocess $iscsi_pid; killprocess $record_pid; delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:51.634 09:02:58 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@79 -- # echo 'Running FIO' 00:20:51.634 Running FIO 00:20:51.634 09:02:58 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 00:20:51.634 [global] 00:20:51.634 thread=1 00:20:51.634 invalidate=1 00:20:51.634 rw=randrw 00:20:51.634 time_based=1 00:20:51.634 runtime=1 00:20:51.634 ioengine=libaio 00:20:51.634 direct=1 00:20:51.634 bs=131072 00:20:51.634 iodepth=32 00:20:51.634 norandommap=1 00:20:51.634 numjobs=1 00:20:51.634 00:20:51.634 [job0] 00:20:51.634 filename=/dev/sda 00:20:51.634 [job1] 
00:20:51.634 filename=/dev/sdb 00:20:51.634 [job2] 00:20:51.634 filename=/dev/sdc 00:20:51.634 [job3] 00:20:51.634 filename=/dev/sdd 00:20:51.634 [job4] 00:20:51.634 filename=/dev/sde 00:20:51.634 [job5] 00:20:51.634 filename=/dev/sdf 00:20:51.634 [job6] 00:20:51.634 filename=/dev/sdg 00:20:51.634 [job7] 00:20:51.634 filename=/dev/sdh 00:20:51.634 [job8] 00:20:51.634 filename=/dev/sdi 00:20:51.634 [job9] 00:20:51.634 filename=/dev/sdj 00:20:51.634 [job10] 00:20:51.634 filename=/dev/sdk 00:20:51.634 [job11] 00:20:51.634 filename=/dev/sdl 00:20:51.634 [job12] 00:20:51.634 filename=/dev/sdm 00:20:51.634 [job13] 00:20:51.634 filename=/dev/sdn 00:20:51.634 [job14] 00:20:51.634 filename=/dev/sdo 00:20:51.634 [job15] 00:20:51.634 filename=/dev/sdp 00:20:51.891 queue_depth set to 113 (sda) 00:20:51.891 queue_depth set to 113 (sdb) 00:20:51.891 queue_depth set to 113 (sdc) 00:20:51.891 queue_depth set to 113 (sdd) 00:20:51.891 queue_depth set to 113 (sde) 00:20:51.891 queue_depth set to 113 (sdf) 00:20:51.891 queue_depth set to 113 (sdg) 00:20:51.891 queue_depth set to 113 (sdh) 00:20:51.891 queue_depth set to 113 (sdi) 00:20:51.891 queue_depth set to 113 (sdj) 00:20:52.149 queue_depth set to 113 (sdk) 00:20:52.149 queue_depth set to 113 (sdl) 00:20:52.149 queue_depth set to 113 (sdm) 00:20:52.149 queue_depth set to 113 (sdn) 00:20:52.149 queue_depth set to 113 (sdo) 00:20:52.149 queue_depth set to 113 (sdp) 00:20:52.149 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:52.149 fio-3.35 00:20:52.149 Starting 16 threads 00:20:52.149 [2024-07-25 09:02:59.238777] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.149 [2024-07-25 09:02:59.241793] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.149 [2024-07-25 09:02:59.244176] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.149 [2024-07-25 09:02:59.246763] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.149 
[2024-07-25 09:02:59.248995] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.149 [2024-07-25 09:02:59.251220] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.149 [2024-07-25 09:02:59.253570] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.149 [2024-07-25 09:02:59.255966] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.149 [2024-07-25 09:02:59.258933] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.149 [2024-07-25 09:02:59.261185] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.149 [2024-07-25 09:02:59.264096] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.149 [2024-07-25 09:02:59.266394] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.407 [2024-07-25 09:02:59.268894] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.407 [2024-07-25 09:02:59.271246] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.407 [2024-07-25 09:02:59.273579] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:52.407 [2024-07-25 09:02:59.275895] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.622770] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.630421] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.633066] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.635080] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.636943] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.638869] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.640907] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.643262] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.644888] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.646739] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.648699] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.650262] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.651845] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.653407] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.654904] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 [2024-07-25 09:03:00.656407] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.783 00:20:53.783 job0: (groupid=0, jobs=1): err= 0: pid=77986: Thu Jul 25 09:03:00 2024 00:20:53.783 read: IOPS=398, BW=49.9MiB/s (52.3MB/s)(52.2MiB/1048msec) 00:20:53.783 slat (usec): min=6, max=1027, avg=32.50, stdev=65.38 00:20:53.783 clat (usec): min=1038, max=56691, avg=10317.74, stdev=4687.39 00:20:53.783 lat (usec): min=1084, max=56700, avg=10350.24, stdev=4683.33 00:20:53.783 clat percentiles (usec): 00:20:53.783 | 1.00th=[ 3654], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9110], 00:20:53.783 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 
00:20:53.783 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11207], 95.00th=[13042],
00:20:53.783 | 99.00th=[18744], 99.50th=[53740], 99.90th=[56886], 99.95th=[56886],
00:20:53.783 | 99.99th=[56886]
00:20:53.783 bw ( KiB/s): min=51456, max=54419, per=6.64%, avg=52937.50, stdev=2095.16, samples=2
00:20:53.783 iops : min= 402, max= 425, avg=413.50, stdev=16.26, samples=2
00:20:53.783 write: IOPS=422, BW=52.8MiB/s (55.4MB/s)(55.4MiB/1048msec); 0 zone resets
00:20:53.783 slat (usec): min=11, max=2801, avg=51.83, stdev=152.43
00:20:53.783 clat (msec): min=3, max=102, avg=65.72, stdev=12.44
00:20:53.783 lat (msec): min=3, max=102, avg=65.77, stdev=12.45
00:20:53.783 clat percentiles (msec):
00:20:53.783 | 1.00th=[ 10], 5.00th=[ 50], 10.00th=[ 60], 20.00th=[ 64],
00:20:53.783 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69],
00:20:53.783 | 70.00th=[ 70], 80.00th=[ 71], 90.00th=[ 73], 95.00th=[ 77],
00:20:53.783 | 99.00th=[ 93], 99.50th=[ 96], 99.90th=[ 103], 99.95th=[ 103],
00:20:53.783 | 99.99th=[ 103]
00:20:53.783 bw ( KiB/s): min=52630, max=53760, per=6.55%, avg=53195.00, stdev=799.03, samples=2
00:20:53.783 iops : min= 411, max= 420, avg=415.50, stdev= 6.36, samples=2
00:20:53.783 lat (msec) : 2=0.12%, 4=0.81%, 10=29.50%, 20=19.05%, 50=1.28%
00:20:53.783 lat (msec) : 100=49.13%, 250=0.12%
00:20:53.783 cpu : usr=0.86%, sys=2.10%, ctx=764, majf=0, minf=1
00:20:53.783 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=96.4%, >=64=0.0%
00:20:53.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.783 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.783 issued rwts: total=418,443,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.783 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.783 job1: (groupid=0, jobs=1): err= 0: pid=77990: Thu Jul 25 09:03:00 2024
00:20:53.783 read: IOPS=352, BW=44.1MiB/s (46.2MB/s)(46.9MiB/1063msec)
00:20:53.783 slat (usec): min=7, max=1286, avg=35.01, stdev=92.13
00:20:53.783 clat (usec): min=929, max=69306, avg=10528.87, stdev=5610.85
00:20:53.783 lat (usec): min=999, max=69321, avg=10563.88, stdev=5608.18
00:20:53.784 clat percentiles (usec):
00:20:53.784 | 1.00th=[ 1909], 5.00th=[ 7570], 10.00th=[ 8455], 20.00th=[ 8979],
00:20:53.784 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028],
00:20:53.784 | 70.00th=[10552], 80.00th=[11207], 90.00th=[13042], 95.00th=[14484],
00:20:53.784 | 99.00th=[17695], 99.50th=[67634], 99.90th=[69731], 99.95th=[69731],
00:20:53.784 | 99.99th=[69731]
00:20:53.784 bw ( KiB/s): min=45402, max=49920, per=5.98%, avg=47661.00, stdev=3194.71, samples=2
00:20:53.784 iops : min= 354, max= 390, avg=372.00, stdev=25.46, samples=2
00:20:53.784 write: IOPS=405, BW=50.7MiB/s (53.1MB/s)(53.9MiB/1063msec); 0 zone resets
00:20:53.784 slat (usec): min=11, max=1234, avg=42.58, stdev=87.47
00:20:53.784 clat (msec): min=3, max=128, avg=69.49, stdev=17.35
00:20:53.784 lat (msec): min=3, max=128, avg=69.53, stdev=17.35
00:20:53.784 clat percentiles (msec):
00:20:53.784 | 1.00th=[ 8], 5.00th=[ 38], 10.00th=[ 61], 20.00th=[ 63],
00:20:53.784 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 69], 60.00th=[ 71],
00:20:53.784 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 87], 95.00th=[ 95],
00:20:53.784 | 99.00th=[ 115], 99.50th=[ 125], 99.90th=[ 129], 99.95th=[ 129],
00:20:53.784 | 99.99th=[ 129]
00:20:53.784 bw ( KiB/s): min=47711, max=55040, per=6.33%, avg=51375.50, stdev=5182.39, samples=2
00:20:53.784 iops : min= 372, max= 430, avg=401.00, stdev=41.01, samples=2
00:20:53.784 lat (usec) : 1000=0.12%
00:20:53.784 lat (msec) : 2=0.37%, 4=0.25%, 10=27.42%, 20=19.48%, 50=2.23%
00:20:53.784 lat (msec) : 100=48.64%, 250=1.49%
00:20:53.784 cpu : usr=0.28%, sys=2.17%, ctx=749, majf=0, minf=1
00:20:53.784 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=96.2%, >=64=0.0%
00:20:53.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.784 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.784 issued rwts: total=375,431,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.784 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.784 job2: (groupid=0, jobs=1): err= 0: pid=77994: Thu Jul 25 09:03:00 2024
00:20:53.784 read: IOPS=431, BW=53.9MiB/s (56.5MB/s)(56.8MiB/1053msec)
00:20:53.784 slat (usec): min=7, max=940, avg=37.71, stdev=69.75
00:20:53.784 clat (usec): min=2017, max=59512, avg=10156.27, stdev=4117.48
00:20:53.784 lat (usec): min=2033, max=59521, avg=10193.98, stdev=4117.90
00:20:53.784 clat percentiles (usec):
00:20:53.784 | 1.00th=[ 2474], 5.00th=[ 7439], 10.00th=[ 8717], 20.00th=[ 9110],
00:20:53.784 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10159],
00:20:53.784 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11469], 95.00th=[12125],
00:20:53.784 | 99.00th=[14091], 99.50th=[54789], 99.90th=[59507], 99.95th=[59507],
00:20:53.784 | 99.99th=[59507]
00:20:53.784 bw ( KiB/s): min=57344, max=57880, per=7.22%, avg=57612.00, stdev=379.01, samples=2
00:20:53.784 iops : min= 448, max= 452, avg=450.00, stdev= 2.83, samples=2
00:20:53.784 write: IOPS=415, BW=51.9MiB/s (54.4MB/s)(54.6MiB/1053msec); 0 zone resets
00:20:53.784 slat (usec): min=12, max=3547, avg=48.69, stdev=177.42
00:20:53.784 clat (msec): min=7, max=107, avg=66.27, stdev=12.00
00:20:53.784 lat (msec): min=7, max=108, avg=66.32, stdev=12.01
00:20:53.784 clat percentiles (msec):
00:20:53.784 | 1.00th=[ 13], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 63],
00:20:53.784 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69],
00:20:53.784 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 73], 95.00th=[ 77],
00:20:53.784 | 99.00th=[ 104], 99.50th=[ 107], 99.90th=[ 108], 99.95th=[ 108],
00:20:53.784 | 99.99th=[ 108]
00:20:53.784 bw ( KiB/s): min=52015, max=52480, per=6.43%, avg=52247.50, stdev=328.80, samples=2
00:20:53.784 iops : min= 406, max= 410, avg=408.00, stdev= 2.83, samples=2
00:20:53.784 lat (msec) : 4=0.56%, 10=28.62%, 20=22.45%, 50=1.23%, 100=46.35%
00:20:53.784 lat (msec) : 250=0.79%
00:20:53.784 cpu : usr=0.57%, sys=2.38%, ctx=800, majf=0, minf=1
00:20:53.784 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=96.5%, >=64=0.0%
00:20:53.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.784 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.784 issued rwts: total=454,437,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.784 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.784 job3: (groupid=0, jobs=1): err= 0: pid=78013: Thu Jul 25 09:03:00 2024
00:20:53.784 read: IOPS=355, BW=44.5MiB/s (46.6MB/s)(47.4MiB/1065msec)
00:20:53.784 slat (usec): min=6, max=1094, avg=33.42, stdev=75.90
00:20:53.784 clat (usec): min=1923, max=73207, avg=11830.85, stdev=7641.93
00:20:53.784 lat (usec): min=2299, max=73223, avg=11864.27, stdev=7636.81
00:20:53.784 clat percentiles (usec):
00:20:53.784 | 1.00th=[ 3785], 5.00th=[ 5735], 10.00th=[ 9241], 20.00th=[10159],
00:20:53.784 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11600],
00:20:53.784 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12780], 95.00th=[13829],
00:20:53.784 | 99.00th=[70779], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877],
00:20:53.784 | 99.99th=[72877]
00:20:53.784 bw ( KiB/s): min=43008, max=52480, per=5.99%, avg=47744.00, stdev=6697.72, samples=2
00:20:53.784 iops : min= 336, max= 410, avg=373.00, stdev=52.33, samples=2
00:20:53.784 write: IOPS=368, BW=46.0MiB/s (48.2MB/s)(49.0MiB/1065msec); 0 zone resets
00:20:53.784 slat (usec): min=15, max=1505, avg=53.36, stdev=118.07
00:20:53.784 clat (msec): min=2, max=133, avg=75.06, stdev=17.59
00:20:53.784 lat (msec): min=2, max=133, avg=75.12, stdev=17.59
00:20:53.784 clat percentiles (msec):
00:20:53.784 | 1.00th=[ 5], 5.00th=[ 30], 10.00th=[ 69], 20.00th=[ 73],
00:20:53.784 | 30.00th=[ 75], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 80],
00:20:53.784 | 70.00th=[ 81], 80.00th=[ 83], 90.00th=[ 86], 95.00th=[ 89],
00:20:53.784 | 99.00th=[ 121], 99.50th=[ 131], 99.90th=[ 134], 99.95th=[ 134],
00:20:53.784 | 99.99th=[ 134]
00:20:53.784 bw ( KiB/s): min=46080, max=47872, per=5.79%, avg=46976.00, stdev=1267.14, samples=2
00:20:53.784 iops : min= 360, max= 374, avg=367.00, stdev= 9.90, samples=2
00:20:53.784 lat (msec) : 2=0.13%, 4=1.17%, 10=7.13%, 20=41.63%, 50=1.95%
00:20:53.784 lat (msec) : 100=46.17%, 250=1.82%
00:20:53.784 cpu : usr=0.85%, sys=1.79%, ctx=746, majf=0, minf=1
00:20:53.784 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=96.0%, >=64=0.0%
00:20:53.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.784 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.784 issued rwts: total=379,392,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.784 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.784 job4: (groupid=0, jobs=1): err= 0: pid=78021: Thu Jul 25 09:03:00 2024
00:20:53.784 read: IOPS=407, BW=51.0MiB/s (53.5MB/s)(52.9MiB/1037msec)
00:20:53.784 slat (usec): min=6, max=1878, avg=37.93, stdev=112.07
00:20:53.784 clat (usec): min=3843, max=43029, avg=10007.60, stdev=2789.13
00:20:53.784 lat (usec): min=3852, max=43056, avg=10045.53, stdev=2779.49
00:20:53.784 clat percentiles (usec):
00:20:53.784 | 1.00th=[ 6194], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110],
00:20:53.784 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896],
00:20:53.784 | 70.00th=[10159], 80.00th=[10421], 90.00th=[11076], 95.00th=[11994],
00:20:53.784 | 99.00th=[15139], 99.50th=[36963], 99.90th=[43254], 99.95th=[43254],
00:20:53.784 | 99.99th=[43254]
00:20:53.784 bw ( KiB/s): min=49053, max=58368, per=6.73%, avg=53710.50, stdev=6586.70, samples=2
00:20:53.784 iops : min= 383, max= 456, avg=419.50, stdev=51.62, samples=2
00:20:53.784 write: IOPS=423, BW=52.9MiB/s (55.5MB/s)(54.9MiB/1037msec); 0 zone resets
00:20:53.784 slat (usec): min=9, max=1732, avg=52.93, stdev=128.14
00:20:53.784 clat (msec): min=11, max=102, avg=65.73, stdev= 9.14
00:20:53.784 lat (msec): min=11, max=102, avg=65.78, stdev= 9.14
00:20:53.784 clat percentiles (msec):
00:20:53.784 | 1.00th=[ 24], 5.00th=[ 51], 10.00th=[ 60], 20.00th=[ 64],
00:20:53.784 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 67], 60.00th=[ 68],
00:20:53.784 | 70.00th=[ 69], 80.00th=[ 71], 90.00th=[ 72], 95.00th=[ 74],
00:20:53.784 | 99.00th=[ 93], 99.50th=[ 101], 99.90th=[ 103], 99.95th=[ 103],
00:20:53.784 | 99.99th=[ 103]
00:20:53.784 bw ( KiB/s): min=51456, max=53397, per=6.46%, avg=52426.50, stdev=1372.49, samples=2
00:20:53.784 iops : min= 402, max= 417, avg=409.50, stdev=10.61, samples=2
00:20:53.784 lat (msec) : 4=0.23%, 10=30.74%, 20=18.10%, 50=2.44%, 100=48.26%
00:20:53.784 lat (msec) : 250=0.23%
00:20:53.784 cpu : usr=0.39%, sys=2.51%, ctx=727, majf=0, minf=1
00:20:53.784 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=96.4%, >=64=0.0%
00:20:53.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.784 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.784 issued rwts: total=423,439,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.784 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.784 job5: (groupid=0, jobs=1): err= 0: pid=78023: Thu Jul 25 09:03:00 2024
00:20:53.784 read: IOPS=419, BW=52.5MiB/s (55.0MB/s)(55.6MiB/1060msec)
00:20:53.784 slat (usec): min=6, max=2454, avg=50.76, stdev=164.32
00:20:53.784 clat (usec): min=3400, max=65515, avg=10110.64, stdev=3366.05
00:20:53.784 lat (usec): min=3411, max=65527, avg=10161.40, stdev=3355.88
00:20:53.784 clat percentiles (usec):
00:20:53.784 | 1.00th=[ 3425], 5.00th=[ 5538], 10.00th=[ 8225], 20.00th=[ 8848],
00:20:53.784 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10159],
00:20:53.784 | 70.00th=[10814], 80.00th=[11469], 90.00th=[12518], 95.00th=[13435],
00:20:53.784 | 99.00th=[16450], 99.50th=[16581], 99.90th=[65274], 99.95th=[65274],
00:20:53.784 | 99.99th=[65274]
00:20:53.784 bw ( KiB/s): min=55808, max=57740, per=7.12%, avg=56774.00, stdev=1366.13, samples=2
00:20:53.784 iops : min= 436, max= 451, avg=443.50, stdev=10.61, samples=2
00:20:53.784 write: IOPS=408, BW=51.1MiB/s (53.5MB/s)(54.1MiB/1060msec); 0 zone resets
00:20:53.784 slat (usec): min=6, max=928, avg=43.24, stdev=73.91
00:20:53.785 clat (msec): min=3, max=117, avg=67.62, stdev=15.32
00:20:53.785 lat (msec): min=3, max=117, avg=67.66, stdev=15.31
00:20:53.785 clat percentiles (msec):
00:20:53.785 | 1.00th=[ 8], 5.00th=[ 37], 10.00th=[ 61], 20.00th=[ 63],
00:20:53.785 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 68], 60.00th=[ 69],
00:20:53.785 | 70.00th=[ 73], 80.00th=[ 78], 90.00th=[ 83], 95.00th=[ 90],
00:20:53.785 | 99.00th=[ 111], 99.50th=[ 113], 99.90th=[ 118], 99.95th=[ 118],
00:20:53.785 | 99.99th=[ 118]
00:20:53.785 bw ( KiB/s): min=47520, max=55296, per=6.33%, avg=51408.00, stdev=5498.46, samples=2
00:20:53.785 iops : min= 371, max= 432, avg=401.50, stdev=43.13, samples=2
00:20:53.785 lat (msec) : 4=0.91%, 10=27.90%, 20=23.12%, 50=1.48%, 100=45.67%
00:20:53.785 lat (msec) : 250=0.91%
00:20:53.785 cpu : usr=0.66%, sys=2.08%, ctx=810, majf=0, minf=1
00:20:53.785 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=96.5%, >=64=0.0%
00:20:53.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.785 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.785 issued rwts: total=445,433,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.785 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.785 job6: (groupid=0, jobs=1): err= 0: pid=78054: Thu Jul 25 09:03:00 2024
00:20:53.785 read: IOPS=407, BW=51.0MiB/s (53.4MB/s)(53.9MiB/1057msec)
00:20:53.785 slat (usec): min=7, max=966, avg=36.35, stdev=75.54
00:20:53.785 clat (usec): min=725, max=64133, avg=10455.69, stdev=4559.10
00:20:53.785 lat (usec): min=744, max=64269, avg=10492.04, stdev=4563.59
00:20:53.785 clat percentiles (usec):
00:20:53.785 | 1.00th=[ 4146], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9241],
00:20:53.785 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159],
00:20:53.785 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11469], 95.00th=[12649],
00:20:53.785 | 99.00th=[19530], 99.50th=[57410], 99.90th=[64226], 99.95th=[64226],
00:20:53.785 | 99.99th=[64226]
00:20:53.785 bw ( KiB/s): min=52736, max=56832, per=6.87%, avg=54784.00, stdev=2896.31, samples=2
00:20:53.785 iops : min= 412, max= 444, avg=428.00, stdev=22.63, samples=2
00:20:53.785 write: IOPS=416, BW=52.0MiB/s (54.6MB/s)(55.0MiB/1057msec); 0 zone resets
00:20:53.785 slat (usec): min=11, max=6032, avg=65.05, stdev=309.05
00:20:53.785 clat (usec): min=1887, max=125601, avg=65861.29, stdev=15190.06
00:20:53.785 lat (msec): min=2, max=126, avg=65.93, stdev=15.13
00:20:53.785 clat percentiles (msec):
00:20:53.785 | 1.00th=[ 5], 5.00th=[ 33], 10.00th=[ 58], 20.00th=[ 62],
00:20:53.785 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69],
00:20:53.785 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 75], 95.00th=[ 83],
00:20:53.785 | 99.00th=[ 115], 99.50th=[ 120], 99.90th=[ 126], 99.95th=[ 126],
00:20:53.785 | 99.99th=[ 126]
00:20:53.785 bw ( KiB/s): min=52224, max=52992, per=6.48%, avg=52608.00, stdev=543.06, samples=2
00:20:53.785 iops : min= 408, max= 414, avg=411.00, stdev= 4.24, samples=2
00:20:53.785 lat (usec) : 750=0.23%
00:20:53.785 lat (msec) : 2=0.23%, 4=0.23%, 10=26.64%, 20=23.54%, 50=2.07%
00:20:53.785 lat (msec) : 100=46.15%, 250=0.92%
00:20:53.785 cpu : usr=1.04%, sys=1.80%, ctx=751, majf=0, minf=1
00:20:53.785 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=96.4%, >=64=0.0%
00:20:53.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.785 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.785 issued rwts: total=431,440,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.785 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.785 job7: (groupid=0, jobs=1): err= 0: pid=78097: Thu Jul 25 09:03:00 2024
00:20:53.785 read: IOPS=346, BW=43.4MiB/s (45.5MB/s)(44.9MiB/1035msec)
00:20:53.785 slat (usec): min=7, max=1119, avg=32.40, stdev=76.35
00:20:53.785 clat (usec): min=2229, max=18943, avg=11173.63, stdev=1537.88
00:20:53.785 lat (usec): min=2238, max=18958, avg=11206.03, stdev=1533.59
00:20:53.785 clat percentiles (usec):
00:20:53.785 | 1.00th=[ 5145], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10290],
00:20:53.785 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469],
00:20:53.785 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12518], 95.00th=[13304],
00:20:53.785 | 99.00th=[15664], 99.50th=[16319], 99.90th=[19006], 99.95th=[19006],
00:20:53.785 | 99.99th=[19006]
00:20:53.785 bw ( KiB/s): min=42068, max=49408, per=5.73%, avg=45738.00, stdev=5190.16, samples=2
00:20:53.785 iops : min= 328, max= 386, avg=357.00, stdev=41.01, samples=2
00:20:53.785 write: IOPS=377, BW=47.2MiB/s (49.5MB/s)(48.9MiB/1035msec); 0 zone resets
00:20:53.785 slat (usec): min=9, max=739, avg=46.28, stdev=59.91
00:20:53.785 clat (msec): min=10, max=100, avg=74.21, stdev=12.23
00:20:53.785 lat (msec): min=10, max=100, avg=74.26, stdev=12.24
00:20:53.785 clat percentiles (msec):
00:20:53.785 | 1.00th=[ 21], 5.00th=[ 45], 10.00th=[ 66], 20.00th=[ 72],
00:20:53.785 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 78], 60.00th=[ 79],
00:20:53.785 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 83], 95.00th=[ 85],
00:20:53.785 | 99.00th=[ 95], 99.50th=[ 99], 99.90th=[ 102], 99.95th=[ 102],
00:20:53.785 | 99.99th=[ 102]
00:20:53.785 bw ( KiB/s): min=44032, max=47454, per=5.63%, avg=45743.00, stdev=2419.72, samples=2
00:20:53.785 iops : min= 344, max= 370, avg=357.00, stdev=18.38, samples=2
00:20:53.785 lat (msec) : 4=0.27%, 10=6.67%, 20=41.33%, 50=2.80%, 100=48.80%
00:20:53.785 lat (msec) : 250=0.13%
00:20:53.785 cpu : usr=0.68%, sys=2.03%, ctx=694, majf=0, minf=1
00:20:53.785 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=95.9%, >=64=0.0%
00:20:53.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.785 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.785 issued rwts: total=359,391,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.785 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.785 job8: (groupid=0, jobs=1): err= 0: pid=78098: Thu Jul 25 09:03:00 2024
00:20:53.785 read: IOPS=371, BW=46.4MiB/s (48.7MB/s)(48.5MiB/1045msec)
00:20:53.785 slat (usec): min=6, max=1328, avg=30.39, stdev=72.77
00:20:53.785 clat (usec): min=3448, max=52766, avg=10412.58, stdev=4667.16
00:20:53.785 lat (usec): min=3464, max=52783, avg=10442.97, stdev=4665.88
00:20:53.785 clat percentiles (usec):
00:20:53.785 | 1.00th=[ 5932], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241],
00:20:53.785 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896],
00:20:53.785 | 70.00th=[10159], 80.00th=[10421], 90.00th=[11469], 95.00th=[12649],
00:20:53.785 | 99.00th=[47449], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691],
00:20:53.785 | 99.99th=[52691]
00:20:53.785 bw ( KiB/s): min=46848, max=51097, per=6.14%, avg=48972.50, stdev=3004.50, samples=2
00:20:53.785 iops : min= 366, max= 399, avg=382.50, stdev=23.33, samples=2
00:20:53.785 write: IOPS=414, BW=51.8MiB/s (54.3MB/s)(54.1MiB/1045msec); 0 zone resets
00:20:53.785 slat (usec): min=11, max=4603, avg=50.09, stdev=226.24
00:20:53.785 clat (msec): min=13, max=111, avg=67.33, stdev= 9.37
00:20:53.785 lat (msec): min=15, max=112, avg=67.38, stdev= 9.32
00:20:53.785 clat percentiles (msec):
00:20:53.785 | 1.00th=[ 28], 5.00th=[ 58], 10.00th=[ 63], 20.00th=[ 64],
00:20:53.785 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69],
00:20:53.785 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 74], 95.00th=[ 77],
00:20:53.785 | 99.00th=[ 103], 99.50th=[ 110], 99.90th=[ 112], 99.95th=[ 112],
00:20:53.785 | 99.99th=[ 112]
00:20:53.785 bw ( KiB/s): min=50586, max=53248, per=6.39%, avg=51917.00, stdev=1882.32, samples=2
00:20:53.785 iops : min= 395, max= 416, avg=405.50, stdev=14.85, samples=2
00:20:53.785 lat (msec) : 4=0.24%, 10=28.99%, 20=17.78%, 50=1.71%, 100=50.67%
00:20:53.785 lat (msec) : 250=0.61%
00:20:53.785 cpu : usr=0.00%, sys=2.59%, ctx=758, majf=0, minf=1
00:20:53.785 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=96.2%, >=64=0.0%
00:20:53.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.785 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.785 issued rwts: total=388,433,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.785 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.785 job9: (groupid=0, jobs=1): err= 0: pid=78099: Thu Jul 25 09:03:00 2024
00:20:53.785 read: IOPS=441, BW=55.2MiB/s (57.8MB/s)(57.2MiB/1038msec)
00:20:53.785 slat (usec): min=7, max=1253, avg=40.65, stdev=110.43
00:20:53.785 clat (usec): min=3121, max=42603, avg=10473.40, stdev=3326.36
00:20:53.785 lat (usec): min=3131, max=42642, avg=10514.05, stdev=3323.97
00:20:53.785 clat percentiles (usec):
00:20:53.785 | 1.00th=[ 5145], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 8979],
00:20:53.785 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10159],
00:20:53.785 | 70.00th=[10814], 80.00th=[11600], 90.00th=[12780], 95.00th=[13566],
00:20:53.785 | 99.00th=[19530], 99.50th=[38536], 99.90th=[42730], 99.95th=[42730],
00:20:53.785 | 99.99th=[42730]
00:20:53.785 bw ( KiB/s): min=56832, max=59392, per=7.29%, avg=58112.00, stdev=1810.19, samples=2
00:20:53.785 iops : min= 444, max= 464, avg=454.00, stdev=14.14, samples=2
00:20:53.785 write: IOPS=407, BW=50.9MiB/s (53.4MB/s)(52.9MiB/1038msec); 0 zone resets
00:20:53.785 slat (usec): min=11, max=3540, avg=53.54, stdev=189.17
00:20:53.785 clat (msec): min=11, max=101, avg=66.95, stdev=12.29
00:20:53.785 lat (msec): min=11, max=101, avg=67.00, stdev=12.29
00:20:53.785 clat percentiles (msec):
00:20:53.785 | 1.00th=[ 26], 5.00th=[ 50], 10.00th=[ 56], 20.00th=[ 61],
00:20:53.785 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 66], 60.00th=[ 68],
00:20:53.785 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 83], 95.00th=[ 88],
00:20:53.785 | 99.00th=[ 101], 99.50th=[ 102], 99.90th=[ 102], 99.95th=[ 102],
00:20:53.785 | 99.99th=[ 102]
00:20:53.786 bw ( KiB/s): min=45312, max=55808, per=6.23%, avg=50560.00, stdev=7421.79, samples=2
00:20:53.786 iops : min= 354, max= 436, avg=395.00, stdev=57.98, samples=2
00:20:53.786 lat (msec) : 4=0.23%, 10=29.74%, 20=21.91%, 50=2.61%, 100=45.06%
00:20:53.786 lat (msec) : 250=0.45%
00:20:53.786 cpu : usr=0.96%, sys=2.03%, ctx=753, majf=0, minf=1
00:20:53.786 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=96.5%, >=64=0.0%
00:20:53.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.786 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.786 issued rwts: total=458,423,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.786 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.786 job10: (groupid=0, jobs=1): err= 0: pid=78100: Thu Jul 25 09:03:00 2024
00:20:53.786 read: IOPS=439, BW=55.0MiB/s (57.6MB/s)(57.0MiB/1037msec)
00:20:53.786 slat (usec): min=6, max=2278, avg=39.91, stdev=136.97
00:20:53.786 clat (usec): min=1456, max=44637, avg=10640.24, stdev=3829.77
00:20:53.786 lat (usec): min=1464, max=44658, avg=10680.14, stdev=3820.36
00:20:53.786 clat percentiles (usec):
00:20:53.786 | 1.00th=[ 5669], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9372],
00:20:53.786 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10290],
00:20:53.786 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11863], 95.00th=[14615],
00:20:53.786 | 99.00th=[39584], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827],
00:20:53.786 | 99.99th=[44827]
00:20:53.786 bw ( KiB/s): min=53248, max=62208, per=7.24%, avg=57728.00, stdev=6335.68, samples=2
00:20:53.786 iops : min= 416, max= 486, avg=451.00, stdev=49.50, samples=2
00:20:53.786 write: IOPS=415, BW=52.0MiB/s (54.5MB/s)(53.9MiB/1037msec); 0 zone resets
00:20:53.786 slat (usec): min=10, max=1050, avg=45.84, stdev=92.96
00:20:53.786 clat (usec): min=8285, max=99387, avg=65459.39, stdev=10026.80
00:20:53.786 lat (usec): min=8320, max=99418, avg=65505.23, stdev=10023.97
00:20:53.786 clat percentiles (usec):
00:20:53.786 | 1.00th=[22414], 5.00th=[48497], 10.00th=[57410], 20.00th=[62129],
00:20:53.786 | 30.00th=[64226], 40.00th=[65799], 50.00th=[66847], 60.00th=[67634],
00:20:53.786 | 70.00th=[69731], 80.00th=[70779], 90.00th=[72877], 95.00th=[74974],
00:20:53.786 | 99.00th=[91751], 99.50th=[96994], 99.90th=[99091], 99.95th=[99091],
00:20:53.786 | 99.99th=[99091]
00:20:53.786 bw ( KiB/s): min=50688, max=52992, per=6.38%, avg=51840.00, stdev=1629.17, samples=2
00:20:53.786 iops : min= 396, max= 414, avg=405.00, stdev=12.73, samples=2
00:20:53.786 lat (msec) : 2=0.23%, 4=0.23%, 10=26.16%, 20=24.58%, 50=2.71%
00:20:53.786 lat (msec) : 100=46.11%
00:20:53.786 cpu : usr=0.77%, sys=2.03%, ctx=744, majf=0, minf=1
00:20:53.786 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=96.5%, >=64=0.0%
00:20:53.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.786 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.786 issued rwts: total=456,431,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.786 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.786 job11: (groupid=0, jobs=1): err= 0: pid=78101: Thu Jul 25 09:03:00 2024
00:20:53.786 read: IOPS=344, BW=43.1MiB/s (45.2MB/s)(44.9MiB/1041msec)
00:20:53.786 slat (usec): min=5, max=792, avg=40.69, stdev=96.56
00:20:53.786 clat (usec): min=1353, max=46517, avg=11475.05, stdev=2733.43
00:20:53.786 lat (usec): min=1366, max=46528, avg=11515.74, stdev=2731.07
00:20:53.786 clat percentiles (usec):
00:20:53.786 | 1.00th=[ 2311], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[10421],
00:20:53.786 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600],
00:20:53.786 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13304], 95.00th=[14615],
00:20:53.786 | 99.00th=[17957], 99.50th=[18744], 99.90th=[46400], 99.95th=[46400],
00:20:53.786 | 99.99th=[46400]
00:20:53.786 bw ( KiB/s): min=40622, max=51046, per=5.75%, avg=45834.00, stdev=7370.88, samples=2
00:20:53.786 iops : min= 317, max= 398, avg=357.50, stdev=57.28, samples=2
00:20:53.786 write: IOPS=372, BW=46.6MiB/s (48.9MB/s)(48.5MiB/1041msec); 0 zone resets
00:20:53.786 slat (usec): min=8, max=1254, avg=51.64, stdev=102.99
00:20:53.786 clat (msec): min=11, max=106, avg=74.95, stdev=11.78
00:20:53.786 lat (msec): min=11, max=106, avg=75.00, stdev=11.78
00:20:53.786 clat percentiles (msec):
00:20:53.786 | 1.00th=[ 20], 5.00th=[ 55], 10.00th=[ 65], 20.00th=[ 70],
00:20:53.786 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 78], 60.00th=[ 79],
00:20:53.786 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 86], 95.00th=[ 89],
00:20:53.786 | 99.00th=[ 103], 99.50th=[ 104], 99.90th=[ 107], 99.95th=[ 107],
00:20:53.786 | 99.99th=[ 107]
00:20:53.786 bw ( KiB/s): min=43607, max=48031, per=5.64%, avg=45819.00, stdev=3128.24, samples=2
00:20:53.786 iops : min= 340, max= 375, avg=357.50, stdev=24.75, samples=2
00:20:53.786 lat (msec) : 2=0.27%, 4=0.40%, 10=5.35%, 20=42.44%, 50=1.87%
00:20:53.786 lat (msec) : 100=49.13%, 250=0.54%
00:20:53.786 cpu : usr=0.87%, sys=1.73%, ctx=683, majf=0, minf=1
00:20:53.786 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=95.9%, >=64=0.0%
00:20:53.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.786 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.786 issued rwts: total=359,388,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.786 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.786 job12: (groupid=0, jobs=1): err= 0: pid=78103: Thu Jul 25 09:03:00 2024
00:20:53.786 read: IOPS=461, BW=57.7MiB/s (60.5MB/s)(60.4MiB/1046msec)
00:20:53.786 slat (usec): min=5, max=952, avg=30.51, stdev=69.36
00:20:53.786 clat (usec): min=2860, max=55321, avg=10411.32, stdev=4757.72
00:20:53.786 lat (usec): min=2870, max=55344, avg=10441.82, stdev=4755.95
00:20:53.786 clat percentiles (usec):
00:20:53.786 | 1.00th=[ 5407], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241],
00:20:53.786 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896],
00:20:53.786 | 70.00th=[10290], 80.00th=[10552], 90.00th=[11207], 95.00th=[12649],
00:20:53.786 | 99.00th=[46924], 99.50th=[52691], 99.90th=[55313], 99.95th=[55313],
00:20:53.786 | 99.99th=[55313]
00:20:53.786 bw ( KiB/s): min=60672, max=61440, per=7.66%, avg=61056.00, stdev=543.06, samples=2
00:20:53.786 iops : min= 474, max= 480, avg=477.00, stdev= 4.24, samples=2
00:20:53.786 write: IOPS=413, BW=51.7MiB/s (54.3MB/s)(54.1MiB/1046msec); 0 zone resets
00:20:53.786 slat (usec): min=10, max=527, avg=37.44, stdev=49.12
00:20:53.786 clat (msec): min=14, max=114, avg=65.44, stdev= 9.54
00:20:53.786 lat (msec): min=14, max=114, avg=65.47, stdev= 9.55
00:20:53.786 clat percentiles (msec):
00:20:53.786 | 1.00th=[ 28], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 63],
00:20:53.786 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 66], 60.00th=[ 67],
00:20:53.786 | 70.00th=[ 68], 80.00th=[ 69], 90.00th=[ 71], 95.00th=[ 74],
00:20:53.786 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 114], 99.95th=[ 114],
00:20:53.786 | 99.99th=[ 114]
00:20:53.786 bw ( KiB/s): min=50944, max=53504, per=6.43%, avg=52224.00, stdev=1810.19, samples=2
00:20:53.786 iops : min= 398, max= 418, avg=408.00, stdev=14.14, samples=2
00:20:53.786 lat (msec) : 4=0.22%, 10=33.41%, 20=18.78%, 50=1.31%, 100=45.52%
00:20:53.786 lat (msec) : 250=0.76%
00:20:53.786 cpu : usr=0.67%, sys=2.11%, ctx=855, majf=0, minf=1
00:20:53.786 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=96.6%, >=64=0.0%
00:20:53.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.786 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.786 issued rwts: total=483,433,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.786 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.786 job13: (groupid=0, jobs=1): err= 0: pid=78104: Thu Jul 25 09:03:00 2024
00:20:53.786 read: IOPS=418, BW=52.3MiB/s (54.9MB/s)(54.2MiB/1037msec)
00:20:53.786 slat (usec): min=6, max=1241, avg=36.57, stdev=86.13
00:20:53.786 clat (usec): min=1009, max=43422, avg=10779.33, stdev=4493.30
00:20:53.786 lat (usec): min=1023, max=43463, avg=10815.90, stdev=4489.66
00:20:53.786 clat percentiles (usec):
00:20:53.786 | 1.00th=[ 5538], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 8979],
00:20:53.786 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10159],
00:20:53.786 | 70.00th=[10814], 80.00th=[11731], 90.00th=[13173], 95.00th=[15401],
00:20:53.786 | 99.00th=[40109], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254],
00:20:53.786 | 99.99th=[43254]
00:20:53.786 bw ( KiB/s): min=51200, max=58112, per=6.85%, avg=54656.00, stdev=4887.52, samples=2
00:20:53.786 iops : min= 400, max= 454, avg=427.00, stdev=38.18, samples=2
00:20:53.786 write: IOPS=404, BW=50.5MiB/s (53.0MB/s)(52.4MiB/1037msec); 0 zone resets
00:20:53.786 slat (usec): min=9, max=1803, avg=69.65, stdev=182.11
00:20:53.786 clat (usec): min=10768, max=99388, avg=67778.19, stdev=11459.47
00:20:53.786 lat (usec): min=10792, max=99411, avg=67847.84, stdev=11472.44
00:20:53.786 clat percentiles (usec):
00:20:53.786 | 1.00th=[26346], 5.00th=[53216], 10.00th=[57410], 20.00th=[61604],
00:20:53.786 | 30.00th=[63701], 40.00th=[64750], 50.00th=[66323], 60.00th=[68682],
00:20:53.786 | 70.00th=[71828], 80.00th=[77071], 90.00th=[81265], 95.00th=[85459],
00:20:53.786 | 99.00th=[95945], 99.50th=[95945], 99.90th=[99091], 99.95th=[99091],
00:20:53.786 | 99.99th=[99091]
00:20:53.786 bw ( KiB/s): min=45312, max=55808, per=6.23%, avg=50560.00, stdev=7421.79, samples=2
00:20:53.786 iops : min= 354, max= 436, avg=395.00, stdev=57.98, samples=2
00:20:53.786 lat (msec) : 2=0.23%, 4=0.23%, 10=28.37%, 20=21.45%, 50=2.70%
00:20:53.786 lat (msec) : 100=47.01%
00:20:53.786 cpu : usr=0.77%, sys=2.22%, ctx=721, majf=0, minf=1
00:20:53.786 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=96.4%, >=64=0.0%
00:20:53.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.786 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.787 issued rwts: total=434,419,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.787 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.787 job14: (groupid=0, jobs=1): err= 0: pid=78105: Thu Jul 25 09:03:00 2024
00:20:53.787 read: IOPS=397, BW=49.6MiB/s (52.1MB/s)(51.6MiB/1040msec)
00:20:53.787 slat (usec): min=6, max=443, avg=28.18, stdev=33.50
00:20:53.787 clat (usec): min=3706, max=48720, avg=10260.03, stdev=3622.90
00:20:53.787 lat (usec): min=3726, max=48735, avg=10288.21, stdev=3621.09
00:20:53.787 clat percentiles (usec):
00:20:53.787 | 1.00th=[ 6063], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9241],
00:20:53.787 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028],
00:20:53.787 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11207], 95.00th=[12125],
00:20:53.787 | 99.00th=[15664], 99.50th=[41681], 99.90th=[48497], 99.95th=[48497],
00:20:53.787 | 99.99th=[48497]
00:20:53.787 bw ( KiB/s): min=49920, max=54784, per=6.56%, avg=52352.00, stdev=3439.37, samples=2
00:20:53.787 iops : min= 390, max= 428, avg=409.00, stdev=26.87, samples=2
00:20:53.787 write: IOPS=415, BW=51.9MiB/s (54.4MB/s)(54.0MiB/1040msec); 0 zone resets
00:20:53.787 slat (usec): min=10, max=2867, avg=57.49, stdev=182.84
00:20:53.787 clat (msec): min=9, max=101, avg=67.01, stdev=10.08
00:20:53.787 lat (msec): min=9, max=101, avg=67.07, stdev=10.08
00:20:53.787 clat percentiles (msec):
00:20:53.787 | 1.00th=[ 23], 5.00th=[ 55], 10.00th=[ 62], 20.00th=[ 63],
00:20:53.787 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69],
00:20:53.787 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 77], 95.00th=[ 79],
00:20:53.787 | 99.00th=[ 95], 99.50th=[ 100], 99.90th=[ 102], 99.95th=[ 102],
00:20:53.787 | 99.99th=[ 102]
00:20:53.787 bw ( KiB/s): min=50944, max=52736, per=6.38%, avg=51840.00, stdev=1267.14, samples=2
00:20:53.787 iops : min= 398, max= 412, avg=405.00, stdev= 9.90, samples=2
00:20:53.787 lat (msec) : 4=0.12%, 10=28.17%, 20=20.59%, 50=2.25%, 100=48.76%
00:20:53.787 lat (msec) : 250=0.12%
00:20:53.787 cpu : usr=1.06%, sys=1.73%, ctx=729, majf=0, minf=1
00:20:53.787 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=96.3%, >=64=0.0%
00:20:53.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:53.787 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
00:20:53.787 issued rwts: total=413,432,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:53.787 latency : target=0, window=0, percentile=100.00%, depth=32
00:20:53.787 job15: (groupid=0, jobs=1): err= 0: pid=78106: Thu Jul 25 09:03:00 2024
00:20:53.787 read: IOPS=341, BW=42.7MiB/s (44.8MB/s)(45.1MiB/1057msec)
00:20:53.787 slat (usec): min=6, max=1035, avg=36.31, stdev=89.31
00:20:53.787 clat (usec): min=1870, max=65405, avg=11438.63, stdev=5935.70
00:20:53.787 lat (usec): min=1939, max=65413, avg=11474.94, stdev=5936.77
00:20:53.787 clat percentiles (usec):
00:20:53.787 | 1.00th=[ 3097], 5.00th=[ 5669], 10.00th=[ 7439], 20.00th=[10159],
00:20:53.787 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11338], 60.00th=[11600],
00:20:53.787 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12780], 95.00th=[14222],
00:20:53.787 | 99.00th=[57934], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274],
00:20:53.787 | 99.99th=[65274]
00:20:53.787 bw ( KiB/s): min=43776, max=47711, per=5.74%, avg=45743.50, stdev=2782.47, samples=2
00:20:53.787 iops : min= 342, max= 372, avg=357.00, stdev=21.21, samples=2
00:20:53.787 write: IOPS=369, BW=46.2MiB/s
(48.5MB/s)(48.9MiB/1057msec); 0 zone resets 00:20:53.787 slat (usec): min=9, max=1127, avg=52.10, stdev=95.92 00:20:53.787 clat (msec): min=6, max=131, avg=75.64, stdev=16.30 00:20:53.787 lat (msec): min=6, max=131, avg=75.69, stdev=16.31 00:20:53.787 clat percentiles (msec): 00:20:53.787 | 1.00th=[ 11], 5.00th=[ 40], 10.00th=[ 66], 20.00th=[ 72], 00:20:53.787 | 30.00th=[ 74], 40.00th=[ 77], 50.00th=[ 79], 60.00th=[ 80], 00:20:53.787 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 87], 95.00th=[ 91], 00:20:53.787 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 131], 99.95th=[ 131], 00:20:53.787 | 99.99th=[ 131] 00:20:53.787 bw ( KiB/s): min=45402, max=47616, per=5.73%, avg=46509.00, stdev=1565.53, samples=2 00:20:53.787 iops : min= 354, max= 372, avg=363.00, stdev=12.73, samples=2 00:20:53.787 lat (msec) : 2=0.27%, 4=0.80%, 10=7.85%, 20=39.63%, 50=2.13% 00:20:53.787 lat (msec) : 100=47.74%, 250=1.60% 00:20:53.787 cpu : usr=0.76%, sys=1.99%, ctx=696, majf=0, minf=1 00:20:53.787 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=95.9%, >=64=0.0% 00:20:53.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.787 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:53.787 issued rwts: total=361,391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.787 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:53.787 00:20:53.787 Run status group 0 (all jobs): 00:20:53.787 READ: bw=779MiB/s (817MB/s), 42.7MiB/s-57.7MiB/s (44.8MB/s-60.5MB/s), io=830MiB (870MB), run=1035-1065msec 00:20:53.787 WRITE: bw=793MiB/s (831MB/s), 46.0MiB/s-52.9MiB/s (48.2MB/s-55.5MB/s), io=845MiB (886MB), run=1035-1065msec 00:20:53.787 00:20:53.787 Disk stats (read/write): 00:20:53.787 sda: ios=420/380, merge=0/0, ticks=3649/24474, in_queue=28123, util=76.10% 00:20:53.787 sdb: ios=400/372, merge=0/0, ticks=3534/25189, in_queue=28723, util=78.20% 00:20:53.787 sdc: ios=457/371, merge=0/0, ticks=3981/24082, in_queue=28064, util=77.94% 00:20:53.787 
sdd: ios=394/340, merge=0/0, ticks=3746/24790, in_queue=28536, util=79.41% 00:20:53.787 sde: ios=393/367, merge=0/0, ticks=3676/23896, in_queue=27572, util=76.92% 00:20:53.787 sdf: ios=422/371, merge=0/0, ticks=4036/24612, in_queue=28648, util=79.04% 00:20:53.787 sdg: ios=404/380, merge=0/0, ticks=3942/24312, in_queue=28255, util=80.47% 00:20:53.787 sdh: ios=316/316, merge=0/0, ticks=3523/23798, in_queue=27322, util=80.71% 00:20:53.787 sdi: ios=345/366, merge=0/0, ticks=3379/24222, in_queue=27602, util=81.64% 00:20:53.787 sdj: ios=397/349, merge=0/0, ticks=4048/23576, in_queue=27625, util=83.13% 00:20:53.787 sdk: ios=410/360, merge=0/0, ticks=4139/23286, in_queue=27426, util=83.26% 00:20:53.787 sdl: ios=331/316, merge=0/0, ticks=3730/23724, in_queue=27454, util=84.20% 00:20:53.787 sdm: ios=436/368, merge=0/0, ticks=4260/23493, in_queue=27753, util=85.09% 00:20:53.787 sdn: ios=396/350, merge=0/0, ticks=4019/23700, in_queue=27720, util=85.19% 00:20:53.787 sdo: ios=366/361, merge=0/0, ticks=3588/24018, in_queue=27607, util=85.35% 00:20:53.787 sdp: ios=343/335, merge=0/0, ticks=3627/24618, in_queue=28246, util=88.82% 00:20:53.787 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@82 -- # iscsicleanup 00:20:53.787 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:20:53.787 Cleaning up iSCSI connection 00:20:53.787 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:20:54.046 Logging out of session [sid: 22, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 23, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 24, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 25, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 26, 
target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 27, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 28, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 29, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 30, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 31, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 32, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 33, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 34, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 35, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 36, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:20:54.046 Logging out of session [sid: 37, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:20:54.046 Logout of [sid: 22, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:20:54.046 Logout of [sid: 23, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:20:54.046 Logout of [sid: 24, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:20:54.046 Logout of [sid: 25, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:20:54.046 Logout of [sid: 26, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:20:54.046 Logout of [sid: 27, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:20:54.046 Logout of [sid: 28, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 
00:20:54.046 Logout of [sid: 29, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:20:54.046 Logout of [sid: 30, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:20:54.046 Logout of [sid: 31, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:20:54.046 Logout of [sid: 32, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:20:54.046 Logout of [sid: 33, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:20:54.046 Logout of [sid: 34, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:20:54.046 Logout of [sid: 35, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:20:54.046 Logout of [sid: 36, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:20:54.046 Logout of [sid: 37, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@985 -- # rm -rf 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@84 -- # RPCS= 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # seq 0 15 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target0\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc0\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target1\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc1\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target2\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc2\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target3\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc3\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target4\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc4\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target5\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc5\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target6\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc6\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target7\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc7\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target8\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc8\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target9\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc9\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target10\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc10\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target11\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc11\n' 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.046 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target12\n' 00:20:54.047 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc12\n' 00:20:54.047 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.047 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target13\n' 00:20:54.047 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc13\n' 00:20:54.047 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.047 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target14\n' 00:20:54.047 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc14\n' 00:20:54.047 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:54.047 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # 
RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target15\n' 00:20:54.047 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc15\n' 00:20:54.047 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # echo -e iscsi_delete_target_node 'iqn.2016-06.io.spdk:Target0\nbdev_malloc_delete' 'Malloc0\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target1\nbdev_malloc_delete' 'Malloc1\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target2\nbdev_malloc_delete' 'Malloc2\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target3\nbdev_malloc_delete' 'Malloc3\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target4\nbdev_malloc_delete' 'Malloc4\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target5\nbdev_malloc_delete' 'Malloc5\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target6\nbdev_malloc_delete' 'Malloc6\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target7\nbdev_malloc_delete' 'Malloc7\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target8\nbdev_malloc_delete' 'Malloc8\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target9\nbdev_malloc_delete' 'Malloc9\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target10\nbdev_malloc_delete' 'Malloc10\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target11\nbdev_malloc_delete' 'Malloc11\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target12\nbdev_malloc_delete' 'Malloc12\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target13\nbdev_malloc_delete' 'Malloc13\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target14\nbdev_malloc_delete' 'Malloc14\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target15\nbdev_malloc_delete' 'Malloc15\n' 00:20:54.047 09:03:00 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:58.232 09:03:04 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@92 -- # trap 'delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 
00:20:58.232 09:03:04 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@94 -- # killprocess 77563 00:20:58.232 09:03:04 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@950 -- # '[' -z 77563 ']' 00:20:58.232 09:03:04 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # kill -0 77563 00:20:58.232 09:03:04 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@955 -- # uname 00:20:58.232 09:03:04 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:58.232 09:03:04 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77563 00:20:58.232 killing process with pid 77563 00:20:58.232 09:03:04 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:58.232 09:03:04 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:58.232 09:03:04 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77563' 00:20:58.232 09:03:04 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@969 -- # kill 77563 00:20:58.232 09:03:04 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@974 -- # wait 77563 00:21:01.519 09:03:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@95 -- # killprocess 77598 00:21:01.519 09:03:07 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@950 -- # '[' -z 77598 ']' 00:21:01.519 09:03:07 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # kill -0 77598 00:21:01.520 09:03:07 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@955 -- # uname 00:21:01.520 09:03:07 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:01.520 09:03:07 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77598 00:21:01.520 killing process with pid 77598 
00:21:01.520 09:03:08 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@956 -- # process_name=spdk_trace_reco 00:21:01.520 09:03:08 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@960 -- # '[' spdk_trace_reco = sudo ']' 00:21:01.520 09:03:08 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77598' 00:21:01.520 09:03:08 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@969 -- # kill 77598 00:21:01.520 09:03:08 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@974 -- # wait 77598 00:21:01.520 09:03:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@96 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f ./tmp-trace/record.trace 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # grep 'trace entries for lcore' ./tmp-trace/record.notice 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # cut -d ' ' -f 2 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # record_num='124840 00:21:13.730 123104 00:21:13.730 121110 00:21:13.730 109320' 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # grep 'Trace Size of lcore' ./tmp-trace/trace.log 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # cut -d ' ' -f 6 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # trace_tool_num='124840 00:21:13.730 123104 00:21:13.730 121110 00:21:13.730 109320' 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@105 -- # delete_tmp_files 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@19 -- # rm -rf ./tmp-trace 00:21:13.730 entries numbers from trace record are: 124840 123104 121110 109320 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record 
-- trace_record/trace_record.sh@107 -- # echo 'entries numbers from trace record are:' 124840 123104 121110 109320 00:21:13.730 entries numbers from trace tool are: 124840 123104 121110 109320 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@108 -- # echo 'entries numbers from trace tool are:' 124840 123104 121110 109320 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@110 -- # arr_record_num=($record_num) 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@111 -- # arr_trace_tool_num=($trace_tool_num) 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@112 -- # len_arr_record_num=4 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@113 -- # len_arr_trace_tool_num=4 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@116 -- # '[' 4 -ne 4 ']' 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # seq 0 3 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 124840 -le 4096 ']' 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 124840 -ne 124840 ']' 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 123104 -le 4096 ']' 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 123104 -ne 123104 ']' 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:21:13.730 
09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 121110 -le 4096 ']' 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 121110 -ne 121110 ']' 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 109320 -le 4096 ']' 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 109320 -ne 109320 ']' 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@135 -- # trap - SIGINT SIGTERM EXIT 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@136 -- # iscsitestfini 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:21:13.730 00:21:13.730 real 0m26.990s 00:21:13.730 user 1m16.755s 00:21:13.730 sys 0m3.667s 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:21:13.730 ************************************ 00:21:13.730 END TEST iscsi_tgt_trace_record 00:21:13.730 ************************************ 00:21:13.730 09:03:20 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@41 -- # run_test iscsi_tgt_login_redirection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:21:13.730 09:03:20 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:13.730 09:03:20 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:13.730 09:03:20 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:21:13.730 ************************************ 00:21:13.730 START TEST iscsi_tgt_login_redirection 00:21:13.730 ************************************ 00:21:13.730 
09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:21:13.730 * Looking for test storage... 00:21:13.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@23 -- # 
ISCSI_PORT=3260 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@12 -- # iscsitestinit 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@14 -- # NULL_BDEV_SIZE=64 00:21:13.730 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@17 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@20 -- # rpc_addr1=/var/tmp/spdk0.sock 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@21 -- # rpc_addr2=/var/tmp/spdk1.sock 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@25 -- # timing_enter start_iscsi_tgts 00:21:13.731 09:03:20 
iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@28 -- # pid1=78502 00:21:13.731 Process pid: 78502 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@29 -- # echo 'Process pid: 78502' 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@27 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -i 0 -m 0x1 --wait-for-rpc 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@32 -- # pid2=78503 00:21:13.731 Process pid: 78503 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@33 -- # echo 'Process pid: 78503' 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -i 1 -m 0x2 --wait-for-rpc 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@35 -- # trap 'killprocess $pid1; killprocess $pid2; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@37 -- # waitforlisten 78502 /var/tmp/spdk0.sock 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@831 -- # '[' -z 78502 ']' 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk0.sock 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.731 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.731 09:03:20 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:21:13.731 [2024-07-25 09:03:20.582888] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:13.731 [2024-07-25 09:03:20.583032] [ DPDK EAL parameters: iscsi -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.731 [2024-07-25 09:03:20.585458] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:13.731 [2024-07-25 09:03:20.585627] [ DPDK EAL parameters: iscsi -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:13.731 [2024-07-25 09:03:20.767943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.731 [2024-07-25 09:03:20.768766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.990 [2024-07-25 09:03:21.063809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.990 [2024-07-25 09:03:21.086154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.560 09:03:21 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:14.560 09:03:21 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@864 -- # return 0 00:21:14.560 09:03:21 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_set_options -w 0 -o 30 -a 16 00:21:14.560 09:03:21 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock framework_start_init 00:21:15.939 iscsi_tgt_1 is listening. 00:21:15.939 09:03:22 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@40 -- # echo 'iscsi_tgt_1 is listening.' 00:21:15.939 09:03:22 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@42 -- # waitforlisten 78503 /var/tmp/spdk1.sock 00:21:15.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 
00:21:15.939 09:03:22 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@831 -- # '[' -z 78503 ']' 00:21:15.939 09:03:22 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk1.sock 00:21:15.939 09:03:22 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:15.940 09:03:22 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 00:21:15.940 09:03:22 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:15.940 09:03:22 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:21:15.940 09:03:23 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:15.940 09:03:23 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@864 -- # return 0 00:21:15.940 09:03:23 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_set_options -w 0 -o 30 -a 16 00:21:16.200 09:03:23 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock framework_start_init 00:21:17.583 iscsi_tgt_2 is listening. 00:21:17.583 09:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@45 -- # echo 'iscsi_tgt_2 is listening.' 
00:21:17.583 09:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@47 -- # timing_exit start_iscsi_tgts 00:21:17.583 09:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:17.583 09:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:21:17.583 09:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:21:17.841 09:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 10.0.0.1:3260 00:21:17.841 09:03:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock bdev_null_create Null0 64 512 00:21:18.099 Null0 00:21:18.099 09:03:25 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:21:18.358 09:03:25 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:21:18.358 09:03:25 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 10.0.0.3:3260 -p 00:21:18.617 09:03:25 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock bdev_null_create Null0 64 512 00:21:18.876 Null0 00:21:18.876 09:03:25 iscsi_tgt.iscsi_tgt_login_redirection 
-- login_redirection/login_redirection.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@67 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:21:19.136 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@68 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:21:19.136 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:21:19.136 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@69 -- # waitforiscsidevices 1 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@116 -- # local num=1 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:19.136 [2024-07-25 09:03:26.075702] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # n=1 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@123 -- # return 0 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@72 -- # 
fiopid=78616 00:21:19.136 FIO pid: 78616 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@73 -- # echo 'FIO pid: 78616' 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@75 -- # trap 'iscsicleanup; killprocess $pid1; killprocess $pid2; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t randrw -r 15 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:21:19.136 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # jq length 00:21:19.136 [global] 00:21:19.136 thread=1 00:21:19.136 invalidate=1 00:21:19.136 rw=randrw 00:21:19.136 time_based=1 00:21:19.136 runtime=15 00:21:19.136 ioengine=libaio 00:21:19.136 direct=1 00:21:19.136 bs=512 00:21:19.136 iodepth=1 00:21:19.136 norandommap=1 00:21:19.136 numjobs=1 00:21:19.136 00:21:19.136 [job0] 00:21:19.136 filename=/dev/sda 00:21:19.136 queue_depth set to 113 (sda) 00:21:19.396 job0: (g=0): rw=randrw, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:21:19.396 fio-3.35 00:21:19.396 Starting 1 thread 00:21:19.396 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # '[' 1 = 1 ']' 00:21:19.396 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:21:19.396 [2024-07-25 09:03:26.295774] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:19.396 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- 
login_redirection/login_redirection.sh@78 -- # jq length 00:21:19.656 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # '[' 0 = 0 ']' 00:21:19.656 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 -a 10.0.0.3 -p 3260 00:21:19.656 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:21:19.916 09:03:26 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@85 -- # sleep 5 00:21:25.192 09:03:31 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:21:25.192 09:03:31 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # jq length 00:21:25.192 09:03:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # '[' 0 = 0 ']' 00:21:25.192 09:03:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:21:25.192 09:03:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # jq length 00:21:25.464 09:03:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # '[' 1 = 1 ']' 00:21:25.464 09:03:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 00:21:25.723 09:03:32 iscsi_tgt.iscsi_tgt_login_redirection -- 
login_redirection/login_redirection.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:21:25.723 09:03:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@93 -- # sleep 5 00:21:31.032 09:03:37 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:21:31.032 09:03:37 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # jq length 00:21:31.032 09:03:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # '[' 1 = 1 ']' 00:21:31.032 09:03:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:21:31.032 09:03:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # jq length 00:21:31.297 09:03:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # '[' 0 = 0 ']' 00:21:31.297 09:03:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@98 -- # wait 78616 00:21:34.589 [2024-07-25 09:03:41.400331] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.589 00:21:34.589 job0: (groupid=0, jobs=1): err= 0: pid=78647: Thu Jul 25 09:03:41 2024 00:21:34.589 read: IOPS=5889, BW=2945KiB/s (3015kB/s)(43.1MiB/15001msec) 00:21:34.589 slat (usec): min=2, max=2366, avg= 4.43, stdev= 8.10 00:21:34.589 clat (usec): min=3, max=2007.9k, avg=101.65, stdev=9550.89 00:21:34.589 lat (usec): min=46, max=2007.9k, avg=106.07, stdev=9550.90 00:21:34.589 clat percentiles (usec): 00:21:34.589 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 51], 00:21:34.589 | 30.00th=[ 52], 40.00th=[ 53], 50.00th=[ 55], 60.00th=[ 56], 
00:21:34.589 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 67], 95.00th=[ 72], 00:21:34.589 | 99.00th=[ 84], 99.50th=[ 91], 99.90th=[ 113], 99.95th=[ 133], 00:21:34.589 | 99.99th=[ 734] 00:21:34.589 bw ( KiB/s): min= 373, max= 4348, per=100.00%, avg=3670.00, stdev=1064.76, samples=23 00:21:34.589 iops : min= 746, max= 8696, avg=7340.09, stdev=2129.53, samples=23 00:21:34.589 write: IOPS=5873, BW=2937KiB/s (3007kB/s)(43.0MiB/15001msec); 0 zone resets 00:21:34.589 slat (usec): min=2, max=3981, avg= 4.38, stdev=13.75 00:21:34.589 clat (nsec): min=1042, max=1299.9k, avg=58509.81, stdev=11490.16 00:21:34.590 lat (usec): min=47, max=3984, avg=62.89, stdev=17.99 00:21:34.590 clat percentiles (usec): 00:21:34.590 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53], 00:21:34.590 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 59], 00:21:34.590 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 70], 95.00th=[ 75], 00:21:34.590 | 99.00th=[ 88], 99.50th=[ 94], 99.90th=[ 120], 99.95th=[ 139], 00:21:34.590 | 99.99th=[ 396] 00:21:34.590 bw ( KiB/s): min= 373, max= 4450, per=100.00%, avg=3657.39, stdev=1058.61, samples=23 00:21:34.590 iops : min= 746, max= 8900, avg=7314.87, stdev=2117.25, samples=23 00:21:34.590 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=12.91% 00:21:34.590 lat (usec) : 100=86.83%, 250=0.24%, 500=0.01%, 750=0.01%, 1000=0.01% 00:21:34.590 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:21:34.590 cpu : usr=1.71%, sys=7.14%, ctx=177190, majf=0, minf=1 00:21:34.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:34.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.590 issued rwts: total=88347,88105,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:34.590 00:21:34.590 Run status group 0 (all jobs): 00:21:34.590 READ: bw=2945KiB/s (3015kB/s), 
2945KiB/s-2945KiB/s (3015kB/s-3015kB/s), io=43.1MiB (45.2MB), run=15001-15001msec 00:21:34.590 WRITE: bw=2937KiB/s (3007kB/s), 2937KiB/s-2937KiB/s (3007kB/s-3007kB/s), io=43.0MiB (45.1MB), run=15001-15001msec 00:21:34.590 00:21:34.590 Disk stats (read/write): 00:21:34.590 sda: ios=87486/87189, merge=0/0, ticks=8921/5087, in_queue=14008, util=99.45% 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@100 -- # trap - SIGINT SIGTERM EXIT 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@102 -- # iscsicleanup 00:21:34.590 Cleaning up iSCSI connection 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:21:34.590 Logging out of session [sid: 38, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:21:34.590 Logout of [sid: 38, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@985 -- # rm -rf 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@103 -- # killprocess 78502 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@950 -- # '[' -z 78502 ']' 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # kill -0 78502 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@955 -- # uname 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78502 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:34.590 killing process with pid 78502 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78502' 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@969 -- # kill 78502 00:21:34.590 09:03:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@974 -- # wait 78502 00:21:37.879 09:03:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@104 -- # killprocess 78503 00:21:37.879 09:03:44 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@950 -- # '[' -z 78503 ']' 00:21:37.879 09:03:44 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # kill -0 78503 00:21:37.879 09:03:44 iscsi_tgt.iscsi_tgt_login_redirection -- 
common/autotest_common.sh@955 -- # uname 00:21:37.879 09:03:44 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:37.879 09:03:44 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78503 00:21:37.879 09:03:44 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:37.879 09:03:44 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:37.879 killing process with pid 78503 00:21:37.879 09:03:44 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78503' 00:21:37.879 09:03:44 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@969 -- # kill 78503 00:21:37.879 09:03:44 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@974 -- # wait 78503 00:21:40.416 09:03:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@105 -- # iscsitestfini 00:21:40.416 09:03:47 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:21:40.416 00:21:40.416 real 0m26.927s 00:21:40.416 user 0m49.772s 00:21:40.416 sys 0m6.063s 00:21:40.416 09:03:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:40.416 09:03:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:21:40.416 ************************************ 00:21:40.416 END TEST iscsi_tgt_login_redirection 00:21:40.416 ************************************ 00:21:40.417 09:03:47 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@42 -- # run_test iscsi_tgt_digests /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:21:40.417 09:03:47 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:40.417 09:03:47 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:40.417 09:03:47 iscsi_tgt -- 
common/autotest_common.sh@10 -- # set +x 00:21:40.417 ************************************ 00:21:40.417 START TEST iscsi_tgt_digests 00:21:40.417 ************************************ 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:21:40.417 * Looking for test storage... 00:21:40.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@23 -- # 
ISCSI_PORT=3260 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@11 -- # iscsitestinit 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@49 -- # MALLOC_BDEV_SIZE=64 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@52 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@54 -- # timing_enter start_iscsi_tgt 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@57 -- # pid=78950 00:21:40.417 Process pid: 78950 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@58 -- # echo 'Process pid: 78950' 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@56 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@60 -- # trap 'killprocess $pid; 
iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@62 -- # waitforlisten 78950 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@831 -- # '[' -z 78950 ']' 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.417 09:03:47 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:40.676 [2024-07-25 09:03:47.591497] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:40.676 [2024-07-25 09:03:47.591678] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78950 ] 00:21:40.676 [2024-07-25 09:03:47.763597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.245 [2024-07-25 09:03:48.063007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.245 [2024-07-25 09:03:48.063118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.245 [2024-07-25 09:03:48.063267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.245 [2024-07-25 09:03:48.063344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.245 09:03:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.245 09:03:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@864 -- # return 0 00:21:41.245 09:03:48 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@63 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:21:41.245 09:03:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.245 09:03:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:41.245 09:03:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.245 09:03:48 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@64 -- # rpc_cmd framework_start_init 00:21:41.245 09:03:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.245 09:03:48 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:42.625 iscsi_tgt is listening. Running tests... 
00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@65 -- # echo 'iscsi_tgt is listening. Running tests...' 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@67 -- # timing_exit start_iscsi_tgt 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@69 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@70 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@71 -- # rpc_cmd bdev_malloc_create 64 512 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.625 09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:42.626 Malloc0 00:21:42.626 09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.626 09:03:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@76 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:21:42.626 
09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.626 09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:42.626 09:03:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.626 09:03:49 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@77 -- # sleep 1 00:21:43.564 09:03:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@79 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:21:43.564 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:21:43.564 09:03:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.DataDigest' -v None 00:21:43.564 09:03:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # true 00:21:43.564 09:03:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # DataDigestAbility='iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 00:21:43.564 iscsiadm: Could not execute operation on all records: invalid parameter' 00:21:43.564 09:03:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@84 -- # '[' 'iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 
00:21:43.564 iscsiadm: Could not execute operation on all records: invalid parameterx' '!=' x ']' 00:21:43.564 09:03:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@85 -- # run_test iscsi_tgt_digest iscsi_header_digest_test 00:21:43.565 09:03:50 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:43.565 09:03:50 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:43.565 09:03:50 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:43.565 ************************************ 00:21:43.565 START TEST iscsi_tgt_digest 00:21:43.565 ************************************ 00:21:43.565 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1125 -- # iscsi_header_digest_test 00:21:43.565 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@27 -- # node_login_fio_logout 'HeaderDigest -v CRC32C' 00:21:43.565 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:21:43.565 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C 00:21:43.565 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:21:43.565 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:21:43.565 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:21:43.825 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:21:43.825 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:21:43.825 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:43.825 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:43.825 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:43.825 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:43.825 [2024-07-25 09:03:50.692332] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:43.825 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:21:43.825 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:21:43.825 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:21:43.825 09:03:50 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:21:43.825 [global] 00:21:43.825 thread=1 00:21:43.825 invalidate=1 00:21:43.825 rw=write 00:21:43.825 time_based=1 00:21:43.825 runtime=2 00:21:43.825 ioengine=libaio 00:21:43.825 direct=1 00:21:43.825 bs=512 00:21:43.825 iodepth=1 00:21:43.825 norandommap=1 00:21:43.825 numjobs=1 00:21:43.825 00:21:43.825 [job0] 00:21:43.825 filename=/dev/sda 00:21:43.825 queue_depth set to 113 (sda) 00:21:43.825 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:21:43.825 fio-3.35 00:21:43.825 Starting 1 thread 00:21:43.825 [2024-07-25 09:03:50.900288] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:21:46.360 [2024-07-25 09:03:53.011289] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:46.360 00:21:46.360 job0: (groupid=0, jobs=1): err= 0: pid=79059: Thu Jul 25 09:03:53 2024 00:21:46.360 write: IOPS=12.6k, BW=6324KiB/s (6476kB/s)(12.4MiB/2000msec); 0 zone resets 00:21:46.360 slat (usec): min=2, max=110, avg= 5.57, stdev= 4.93 00:21:46.360 clat (usec): min=11, max=3721, avg=72.84, stdev=40.04 00:21:46.360 lat (usec): min=61, max=3725, avg=78.41, stdev=40.09 00:21:46.360 clat percentiles (usec): 00:21:46.360 | 1.00th=[ 53], 5.00th=[ 58], 10.00th=[ 63], 20.00th=[ 66], 00:21:46.360 | 30.00th=[ 68], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:21:46.360 | 70.00th=[ 76], 80.00th=[ 79], 90.00th=[ 84], 95.00th=[ 89], 00:21:46.360 | 99.00th=[ 105], 99.50th=[ 119], 99.90th=[ 151], 99.95th=[ 190], 00:21:46.360 | 99.99th=[ 3326] 00:21:46.360 bw ( KiB/s): min= 6030, max= 6736, per=100.00%, avg=6338.67, stdev=361.26, samples=3 00:21:46.360 iops : min=12061, max=13472, avg=12678.00, stdev=721.96, samples=3 00:21:46.360 lat (usec) : 20=0.01%, 50=0.15%, 100=98.37%, 250=1.44%, 500=0.02% 00:21:46.360 lat (msec) : 2=0.01%, 4=0.01% 00:21:46.360 cpu : usr=3.50%, sys=6.75%, ctx=28859, majf=0, minf=1 00:21:46.360 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:46.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.361 issued rwts: total=0,25295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.361 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:46.361 00:21:46.361 Run status group 0 (all jobs): 00:21:46.361 WRITE: bw=6324KiB/s (6476kB/s), 6324KiB/s-6324KiB/s (6476kB/s-6476kB/s), io=12.4MiB (13.0MB), run=2000-2000msec 00:21:46.361 00:21:46.361 Disk stats (read/write): 00:21:46.361 sda: ios=48/23885, merge=0/0, ticks=9/1731, in_queue=1741, util=95.57% 00:21:46.361 
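The `[global]`/`[job0]` text printed before each run is the fio job file that `scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2` generates. A sketch of that generation step, with illustrative paths (the real wrapper resolves the attached iSCSI disk itself and then invokes fio):

```shell
#!/usr/bin/env bash
# Assumed mapping of wrapper flags to job-file keys:
# -i -> bs, -d -> iodepth, -t -> rw, -r -> runtime. /dev/sda is illustrative.
bs=512 iodepth=1 rw=write runtime=2 dev=/dev/sda

jobfile=$(mktemp)
cat > "$jobfile" <<EOF
[global]
thread=1
invalidate=1
rw=$rw
time_based=1
runtime=$runtime
ioengine=libaio
direct=1
bs=$bs
iodepth=$iodepth
norandommap=1
numjobs=1

[job0]
filename=$dev
EOF

grep -q "^rw=write$" "$jobfile" && echo "job file written: $jobfile"
# fio "$jobfile"   # would run the 2-second 512B sequential-write workload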
09:03:53 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:21:46.361 [global] 00:21:46.361 thread=1 00:21:46.361 invalidate=1 00:21:46.361 rw=read 00:21:46.361 time_based=1 00:21:46.361 runtime=2 00:21:46.361 ioengine=libaio 00:21:46.361 direct=1 00:21:46.361 bs=512 00:21:46.361 iodepth=1 00:21:46.361 norandommap=1 00:21:46.361 numjobs=1 00:21:46.361 00:21:46.361 [job0] 00:21:46.361 filename=/dev/sda 00:21:46.361 queue_depth set to 113 (sda) 00:21:46.361 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:21:46.361 fio-3.35 00:21:46.361 Starting 1 thread 00:21:48.262 00:21:48.262 job0: (groupid=0, jobs=1): err= 0: pid=79113: Thu Jul 25 09:03:55 2024 00:21:48.262 read: IOPS=13.7k, BW=6857KiB/s (7022kB/s)(13.4MiB/2001msec) 00:21:48.262 slat (nsec): min=3160, max=75030, avg=4594.12, stdev=1678.38 00:21:48.262 clat (usec): min=9, max=2148, avg=67.79, stdev=16.02 00:21:48.262 lat (usec): min=56, max=2157, avg=72.38, stdev=16.34 00:21:48.262 clat percentiles (usec): 00:21:48.262 | 1.00th=[ 57], 5.00th=[ 59], 10.00th=[ 60], 20.00th=[ 62], 00:21:48.262 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 69], 00:21:48.262 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 77], 95.00th=[ 82], 00:21:48.262 | 99.00th=[ 93], 99.50th=[ 99], 99.90th=[ 123], 99.95th=[ 180], 00:21:48.262 | 99.99th=[ 437] 00:21:48.262 bw ( KiB/s): min= 6681, max= 7179, per=100.00%, avg=6886.33, stdev=260.23, samples=3 00:21:48.262 iops : min=13362, max=14358, avg=13772.67, stdev=520.47, samples=3 00:21:48.262 lat (usec) : 10=0.01%, 100=99.57%, 250=0.38%, 500=0.04%, 750=0.01% 00:21:48.262 lat (msec) : 4=0.01% 00:21:48.262 cpu : usr=2.05%, sys=9.85%, ctx=27445, majf=0, minf=1 00:21:48.262 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:48.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:21:48.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.262 issued rwts: total=27443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.262 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:48.262 00:21:48.262 Run status group 0 (all jobs): 00:21:48.262 READ: bw=6857KiB/s (7022kB/s), 6857KiB/s-6857KiB/s (7022kB/s-7022kB/s), io=13.4MiB (14.1MB), run=2001-2001msec 00:21:48.262 00:21:48.262 Disk stats (read/write): 00:21:48.262 sda: ios=25989/0, merge=0/0, ticks=1734/0, in_queue=1734, util=95.08% 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:21:48.522 Logging out of session [sid: 39, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:21:48.522 Logout of [sid: 39, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:48.522 iscsiadm: No active sessions. 
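The `waitforiscsidevices` calls visible above (iscsi_tgt/common.sh@116-123) poll the session listing until the expected number of attached disks appears. A runnable sketch with `iscsiadm` stubbed to report one attached disk, matching the successful waits in the log:

```shell
#!/usr/bin/env bash
# Stub: one attached disk. The real command is `iscsiadm -m session -P 3`.
iscsiadm() { echo 'Attached scsi disk sda  State: running'; }

waitforiscsidevices() {
  local num=$1 i n
  for ((i = 1; i <= 20; i++)); do
    # grep -c exits non-zero on zero matches, hence the guard.
    n=$(iscsiadm -m session -P 3 | grep -c 'Attached scsi disk sd[a-z]*') || true
    [ "$n" -eq "$num" ] && return 0
    sleep 0.1
  done
  return 1
}

waitforiscsidevices 1 && echo "device count reached"
```

After logout the same helper is called with `0`; `iscsiadm` then prints "No active sessions." and exits non-zero, which is why the log shows `common.sh@119 -- # true` swallowing that failure before `n=0` satisfies the check.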
00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@31 -- # node_login_fio_logout 'HeaderDigest -v CRC32C,None' 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C,None 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:21:48.522 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:21:48.522 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:48.522 [2024-07-25 09:03:55.519720] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:21:48.522 09:03:55 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:21:48.522 [global] 00:21:48.522 thread=1 00:21:48.522 invalidate=1 00:21:48.522 rw=write 00:21:48.522 time_based=1 00:21:48.522 runtime=2 00:21:48.522 ioengine=libaio 00:21:48.522 direct=1 00:21:48.522 bs=512 00:21:48.522 iodepth=1 00:21:48.522 norandommap=1 00:21:48.522 numjobs=1 00:21:48.522 00:21:48.522 [job0] 00:21:48.522 filename=/dev/sda 00:21:48.522 queue_depth set to 113 (sda) 00:21:48.780 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:21:48.780 fio-3.35 00:21:48.780 Starting 1 thread 00:21:48.780 [2024-07-25 09:03:55.730928] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:21:51.309 [2024-07-25 09:03:57.836091] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:51.309 00:21:51.309 job0: (groupid=0, jobs=1): err= 0: pid=79184: Thu Jul 25 09:03:57 2024 00:21:51.309 write: IOPS=10.9k, BW=5434KiB/s (5564kB/s)(10.6MiB/2001msec); 0 zone resets 00:21:51.309 slat (usec): min=3, max=649, avg= 6.04, stdev= 6.12 00:21:51.309 clat (nsec): min=905, max=2107.5k, avg=85290.32, stdev=22059.32 00:21:51.309 lat (usec): min=70, max=2118, avg=91.33, stdev=22.79 00:21:51.309 clat percentiles (usec): 00:21:51.309 | 1.00th=[ 67], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 79], 00:21:51.309 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 86], 00:21:51.309 | 70.00th=[ 88], 80.00th=[ 91], 90.00th=[ 97], 95.00th=[ 103], 00:21:51.309 | 99.00th=[ 119], 99.50th=[ 127], 99.90th=[ 202], 99.95th=[ 314], 00:21:51.309 | 99.99th=[ 685] 00:21:51.309 bw ( KiB/s): min= 5422, max= 5606, per=100.00%, avg=5498.00, stdev=96.08, samples=3 00:21:51.309 iops : min=10844, max=11212, avg=10996.00, stdev=192.17, samples=3 00:21:51.309 lat (nsec) : 1000=0.01% 00:21:51.309 lat (usec) : 2=0.01%, 50=0.01%, 100=93.10%, 250=6.79%, 500=0.06% 00:21:51.309 lat (usec) : 750=0.01% 00:21:51.309 lat (msec) : 2=0.01%, 4=0.01% 00:21:51.309 cpu : usr=1.70%, sys=8.60%, ctx=22029, majf=0, minf=1 00:21:51.309 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:51.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.309 issued rwts: total=0,21747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.309 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:51.309 00:21:51.309 Run status group 0 (all jobs): 00:21:51.309 WRITE: bw=5434KiB/s (5564kB/s), 5434KiB/s-5434KiB/s (5564kB/s-5564kB/s), io=10.6MiB (11.1MB), run=2001-2001msec 00:21:51.309 00:21:51.309 Disk stats (read/write): 00:21:51.309 
sda: ios=48/20584, merge=0/0, ticks=9/1754, in_queue=1764, util=95.63% 00:21:51.309 09:03:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:21:51.309 [global] 00:21:51.309 thread=1 00:21:51.309 invalidate=1 00:21:51.309 rw=read 00:21:51.309 time_based=1 00:21:51.309 runtime=2 00:21:51.309 ioengine=libaio 00:21:51.309 direct=1 00:21:51.309 bs=512 00:21:51.309 iodepth=1 00:21:51.309 norandommap=1 00:21:51.309 numjobs=1 00:21:51.309 00:21:51.309 [job0] 00:21:51.309 filename=/dev/sda 00:21:51.309 queue_depth set to 113 (sda) 00:21:51.309 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:21:51.309 fio-3.35 00:21:51.309 Starting 1 thread 00:21:53.208 00:21:53.208 job0: (groupid=0, jobs=1): err= 0: pid=79238: Thu Jul 25 09:04:00 2024 00:21:53.208 read: IOPS=12.0k, BW=5989KiB/s (6132kB/s)(11.7MiB/2000msec) 00:21:53.208 slat (usec): min=3, max=133, avg= 5.72, stdev= 2.29 00:21:53.208 clat (usec): min=58, max=4094, avg=77.21, stdev=48.24 00:21:53.208 lat (usec): min=66, max=4102, avg=82.92, stdev=48.45 00:21:53.208 clat percentiles (usec): 00:21:53.208 | 1.00th=[ 65], 5.00th=[ 67], 10.00th=[ 69], 20.00th=[ 71], 00:21:53.208 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 76], 60.00th=[ 77], 00:21:53.208 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 86], 95.00th=[ 91], 00:21:53.208 | 99.00th=[ 104], 99.50th=[ 112], 99.90th=[ 155], 99.95th=[ 429], 00:21:53.208 | 99.99th=[ 3130] 00:21:53.208 bw ( KiB/s): min= 5910, max= 6106, per=100.00%, avg=6020.33, stdev=100.30, samples=3 00:21:53.208 iops : min=11820, max=12212, avg=12040.00, stdev=200.36, samples=3 00:21:53.208 lat (usec) : 100=98.39%, 250=1.55%, 500=0.02%, 750=0.01%, 1000=0.01% 00:21:53.208 lat (msec) : 4=0.02%, 10=0.01% 00:21:53.208 cpu : usr=2.55%, sys=9.15%, ctx=23954, majf=0, minf=1 00:21:53.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:21:53.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.208 issued rwts: total=23954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:53.208 00:21:53.208 Run status group 0 (all jobs): 00:21:53.208 READ: bw=5989KiB/s (6132kB/s), 5989KiB/s-5989KiB/s (6132kB/s-6132kB/s), io=11.7MiB (12.3MB), run=2000-2000msec 00:21:53.208 00:21:53.208 Disk stats (read/write): 00:21:53.208 sda: ios=22657/0, merge=0/0, ticks=1722/0, in_queue=1722, util=94.48% 00:21:53.208 09:04:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:21:53.208 Logging out of session [sid: 40, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:21:53.208 Logout of [sid: 40, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:21:53.208 09:04:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:21:53.208 09:04:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:21:53.208 09:04:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:53.208 09:04:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:53.208 09:04:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:53.208 09:04:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:53.208 iscsiadm: No active sessions. 
00:21:53.208 09:04:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:21:53.208 09:04:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:21:53.208 09:04:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:21:53.208 09:04:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:21:53.208 00:21:53.208 real 0m9.672s 00:21:53.208 user 0m0.773s 00:21:53.208 sys 0m1.086s 00:21:53.208 09:04:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:53.208 09:04:00 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@10 -- # set +x 00:21:53.208 ************************************ 00:21:53.208 END TEST iscsi_tgt_digest 00:21:53.208 ************************************ 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@92 -- # iscsicleanup 00:21:53.466 Cleaning up iSCSI connection 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:21:53.466 iscsiadm: No matching sessions found 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@983 -- # true 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@985 -- # rm -rf 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@93 -- # killprocess 78950 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@950 -- # '[' -z 78950 ']' 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- 
common/autotest_common.sh@954 -- # kill -0 78950 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@955 -- # uname 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78950 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:53.466 killing process with pid 78950 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78950' 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@969 -- # kill 78950 00:21:53.466 09:04:00 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@974 -- # wait 78950 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@94 -- # iscsitestfini 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:21:56.744 00:21:56.744 real 0m16.330s 00:21:56.744 user 0m58.087s 00:21:56.744 sys 0m3.714s 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:56.744 ************************************ 00:21:56.744 END TEST iscsi_tgt_digests 00:21:56.744 ************************************ 00:21:56.744 09:04:03 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@43 -- # run_test iscsi_tgt_fuzz /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:21:56.744 09:04:03 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:56.744 09:04:03 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:56.744 09:04:03 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 
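The teardown above runs the `killprocess` helper (autotest_common.sh@950-974): validate the pid, confirm it is alive with `kill -0`, check the process name so a `sudo` wrapper is never signalled, then kill and reap. A sketch demonstrated on a throwaway background sleep instead of the SPDK target:

```shell
#!/usr/bin/env bash
# Throwaway victim process standing in for the iscsi_tgt reactor.
sleep 60 & pid=$!

killprocess() {
  local p=$1 name
  [ -z "$p" ] && return 1
  kill -0 "$p" 2>/dev/null || return 1        # still running?
  name=$(ps --no-headers -o comm= "$p")       # process name, as in the log
  [ "$name" = sudo ] && return 1              # never kill a sudo wrapper
  echo "killing process with pid $p"
  kill "$p"
  wait "$p" 2>/dev/null || true               # reap; ignore SIGTERM status
  return 0
}

killprocess "$pid" && echo "process $pid reaped"
```

The `wait` at the end is what the log's `autotest_common.sh@974 -- # wait 78950` corresponds to: it blocks until the target has actually exited before the next test stage starts.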
00:21:56.744 ************************************ 00:21:56.744 START TEST iscsi_tgt_fuzz 00:21:56.744 ************************************ 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:21:56.744 * Looking for test storage... 00:21:56.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/fuzz 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@24 -- # 
NETMASK=10.0.0.2/32 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@11 -- # iscsitestinit 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@13 -- # '[' -z 10.0.0.1 ']' 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@18 -- # '[' -z 10.0.0.2 ']' 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@23 -- # timing_enter iscsi_fuzz_test 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@25 -- # MALLOC_BDEV_SIZE=64 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@26 -- # MALLOC_BLOCK_SIZE=4096 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@28 -- # TEST_TIMEOUT=1200 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@31 -- # for i in "$@" 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@32 -- # case "$i" in 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@34 -- # TEST_TIMEOUT=30 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@39 -- # timing_enter start_iscsi_tgt 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@42 -- # iscsipid=79366 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --disable-cpumask-locks --wait-for-rpc 00:21:56.744 Process iscsipid: 79366 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@43 -- # echo 'Process iscsipid: 79366' 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@45 -- # trap 'killprocess $iscsipid; exit 1' SIGINT SIGTERM EXIT 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@47 -- # waitforlisten 79366 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@831 -- # '[' -z 79366 ']' 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:56.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:56.744 09:04:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:58.117 09:04:04 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.117 09:04:04 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@864 -- # return 0 00:21:58.117 09:04:04 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@49 -- # rpc_cmd iscsi_set_options -o 60 -a 16 00:21:58.117 09:04:04 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.117 09:04:04 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:58.118 09:04:04 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.118 09:04:04 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@50 -- # rpc_cmd framework_start_init 00:21:58.118 09:04:04 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.118 09:04:04 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:59.054 iscsi_tgt is listening. Running tests... 00:21:59.054 09:04:05 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.054 09:04:05 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@51 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:21:59.054 09:04:05 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@52 -- # timing_exit start_iscsi_tgt 00:21:59.054 09:04:05 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:59.054 09:04:05 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@54 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@55 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@56 -- # rpc_cmd bdev_malloc_create 64 4096 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:59.054 Malloc0 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@57 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.054 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:59.055 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:59.055 09:04:06 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@58 -- # sleep 1 00:22:00.433 09:04:07 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@60 -- # trap 'killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.433 09:04:07 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/iscsi_fuzz/iscsi_fuzz -m 0xF0 -T 10.0.0.1 -t 30 00:22:32.512 pdu received after logout 00:22:32.512 Fuzzing completed. Shutting down the fuzz application. 00:22:32.512 00:22:32.512 device 0x6110000160c0 stats: Sent 13510 valid opcode PDUs, 122410 invalid opcode PDUs. 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@64 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:disk1 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@67 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@71 -- # killprocess 79366 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@950 -- # '[' -z 79366 ']' 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # kill -0 79366 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@955 -- # uname 00:22:32.512 
09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79366 00:22:32.512 killing process with pid 79366 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79366' 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@969 -- # kill 79366 00:22:32.512 09:04:38 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@974 -- # wait 79366 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@73 -- # iscsitestfini 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@75 -- # timing_exit iscsi_fuzz_test 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:35.043 00:22:35.043 real 0m38.050s 00:22:35.043 user 3m38.474s 00:22:35.043 sys 0m21.333s 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:35.043 ************************************ 00:22:35.043 END TEST iscsi_tgt_fuzz 00:22:35.043 ************************************ 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:35.043 09:04:41 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@44 -- # run_test iscsi_tgt_multiconnection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:22:35.043 09:04:41 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:35.043 09:04:41 
iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:35.043 09:04:41 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:22:35.043 ************************************ 00:22:35.043 START TEST iscsi_tgt_multiconnection 00:22:35.043 ************************************ 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:22:35.043 * Looking for test storage... 00:22:35.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:22:35.043 09:04:41 
iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:22:35.043 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@11 -- # iscsitestinit 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@18 -- # CONNECTION_NUMBER=30 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@40 -- # timing_enter start_iscsi_tgt 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@42 -- # iscsipid=79845 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@43 -- # echo 'iSCSI target launched. pid: 79845' 00:22:35.044 iSCSI target launched. pid: 79845 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@44 -- # trap 'remove_backends; iscsicleanup; killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@46 -- # waitforlisten 79845 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 79845 ']' 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:35.044 09:04:41 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.044 [2024-07-25 09:04:42.047802] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:35.044 [2024-07-25 09:04:42.047938] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79845 ] 00:22:35.303 [2024-07-25 09:04:42.214258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.561 [2024-07-25 09:04:42.532422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.819 09:04:42 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.819 09:04:42 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:22:35.819 09:04:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 128 00:22:36.078 09:04:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:22:37.451 09:04:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:22:37.451 09:04:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:38.016 09:04:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@50 -- # timing_exit start_iscsi_tgt 00:22:38.016 09:04:44 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.016 09:04:44 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:38.016 09:04:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:22:38.016 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:22:38.273 Creating an iSCSI target node. 00:22:38.273 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@55 -- # echo 'Creating an iSCSI target node.' 00:22:38.273 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs0 -c 1048576 00:22:38.531 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # ls_guid=80a0298d-caab-47a0-9bfe-aa96c4e9ce69 00:22:38.531 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@59 -- # get_lvs_free_mb 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 00:22:38.531 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1364 -- # local lvs_uuid=80a0298d-caab-47a0-9bfe-aa96c4e9ce69 00:22:38.531 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1365 -- # local lvs_info 00:22:38.531 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1366 -- # local fc 00:22:38.531 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1367 -- # local cs 00:22:38.531 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:38.789 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:22:38.789 { 00:22:38.789 "uuid": "80a0298d-caab-47a0-9bfe-aa96c4e9ce69", 00:22:38.789 "name": "lvs0", 00:22:38.789 "base_bdev": "Nvme0n1", 00:22:38.789 "total_data_clusters": 5099, 00:22:38.789 "free_clusters": 5099, 00:22:38.789 "block_size": 4096, 00:22:38.789 "cluster_size": 1048576 00:22:38.789 } 00:22:38.789 ]' 00:22:38.789 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # jq '.[] | 
select(.uuid=="80a0298d-caab-47a0-9bfe-aa96c4e9ce69") .free_clusters' 00:22:38.789 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # fc=5099 00:22:38.789 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="80a0298d-caab-47a0-9bfe-aa96c4e9ce69") .cluster_size' 00:22:38.789 5099 00:22:38.789 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # cs=1048576 00:22:38.789 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1373 -- # free_mb=5099 00:22:38.789 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1374 -- # echo 5099 00:22:38.789 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@60 -- # lvol_bdev_size=169 00:22:38.789 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # seq 1 30 00:22:38.789 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:38.789 09:04:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_1 169 00:22:39.047 0377c2c6-6300-4100-8fec-d9efc6170504 00:22:39.047 09:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:39.047 09:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_2 169 00:22:39.306 e4bb7038-6aa0-45cc-b6e6-f7f23a15a40f 00:22:39.306 09:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:39.306 09:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_3 169 00:22:39.564 f81c471c-5bb0-4687-96de-3714179ee2c0 00:22:39.564 09:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:39.564 09:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_4 169 00:22:39.564 dac20a4d-28c9-45b3-84a9-e0f74cc37f94 00:22:39.564 09:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:39.564 09:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_5 169 00:22:39.823 6f479dd7-652d-454f-8f50-bfe90363f4d0 00:22:39.823 09:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:39.824 09:04:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_6 169 00:22:40.083 2b0943e8-602b-408f-bc2c-14369e41ef15 00:22:40.083 09:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:40.083 09:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_7 169 00:22:40.342 2c7389f7-3982-4111-b418-c3253956cc3e 00:22:40.342 09:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:40.342 09:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_8 169 00:22:40.602 62787ff3-5807-46bc-9e7a-632adacbc5a7 00:22:40.602 09:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:40.602 09:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_9 169 00:22:40.602 b6c3517d-c5a3-4bca-ab8e-53481a080b42 00:22:40.861 09:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:40.861 09:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_10 169 00:22:40.861 78723b1d-1828-43e2-a200-31571afc0d40 00:22:41.120 09:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:41.120 09:04:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_11 169 00:22:41.120 b194d033-67a7-4c5c-a724-1f1328da23dd 00:22:41.120 09:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:41.120 09:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_12 169 00:22:41.379 4307361d-7e4d-4646-b6ba-f3b3781205bd 00:22:41.379 09:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:41.379 09:04:48 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_13 169 00:22:41.640 892a30e6-0330-43cb-97ab-924ff368b9c3 00:22:41.640 09:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:41.640 09:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_14 169 00:22:41.640 e5538db4-9d8b-4bf3-a97c-74e567f15f6e 00:22:41.640 09:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:41.640 09:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_15 169 00:22:41.899 fd092222-6499-4f9b-bb62-4dc144e48a6e 00:22:41.899 09:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:41.899 09:04:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_16 169 00:22:42.158 bbb7b7e1-3623-4583-8888-b9820965d31c 00:22:42.158 09:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:42.158 09:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_17 169 00:22:42.417 eebe88fb-4440-4cd2-9006-d07cf65a3f3c 00:22:42.417 09:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:22:42.417 09:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_18 169 00:22:42.417 5b134293-a608-4816-a56b-10ed476cd9fa 00:22:42.417 09:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:42.417 09:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_19 169 00:22:42.677 67620356-18df-47e6-b067-fe49d2a0ac02 00:22:42.677 09:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:42.677 09:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_20 169 00:22:42.935 1cebc384-93a3-4d72-86f1-30b281a01d50 00:22:42.936 09:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:42.936 09:04:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_21 169 00:22:43.194 e5c2fad1-9ea0-4531-a17a-21439235fceb 00:22:43.194 09:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:43.194 09:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_22 169 00:22:43.194 72a680cd-a029-4ca0-bc3a-6f3fa0dfadab 00:22:43.194 09:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:43.194 09:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_23 169 00:22:43.453 43178e56-0edc-40dd-b5cd-18044cf0deb0 00:22:43.453 09:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:43.453 09:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_24 169 00:22:43.712 deaee400-a285-4dcd-acdf-4a11d2a9daef 00:22:43.712 09:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:43.712 09:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_25 169 00:22:43.971 e896561c-f592-4b69-b1d3-202baac7e074 00:22:43.971 09:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:43.971 09:04:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_26 169 00:22:43.971 2a4d141d-e99e-4926-bd8e-83bf800f04f3 00:22:43.971 09:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:43.971 09:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_27 169 00:22:44.230 0a25541c-1f00-4d23-9ce0-5d3fef147b94 00:22:44.230 09:04:51 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:44.230 09:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_28 169 00:22:44.504 7a142bee-791a-424e-b9bf-091f32f0998c 00:22:44.504 09:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:44.504 09:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_29 169 00:22:44.766 87da21d5-88eb-430d-93eb-c06d945b2880 00:22:44.766 09:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:44.766 09:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80a0298d-caab-47a0-9bfe-aa96c4e9ce69 lbd_30 169 00:22:44.766 45dcbc4f-c1a0-43c7-be9a-d5ef94c6cacd 00:22:44.766 09:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # seq 1 30 00:22:45.025 09:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:45.025 09:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_1:0 00:22:45.025 09:04:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias lvs0/lbd_1:0 1:2 256 -d 00:22:45.025 09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:45.025 09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_2:0 00:22:45.025 09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias lvs0/lbd_2:0 1:2 256 -d 00:22:45.285 09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:45.285 09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_3:0 00:22:45.285 09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias lvs0/lbd_3:0 1:2 256 -d 00:22:45.544 09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:45.544 09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_4:0 00:22:45.544 09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias lvs0/lbd_4:0 1:2 256 -d 00:22:45.803 09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:45.803 09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_5:0 00:22:45.803 09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias lvs0/lbd_5:0 1:2 256 -d 00:22:45.803 09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:45.803 09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_6:0 00:22:45.803 
09:04:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias lvs0/lbd_6:0 1:2 256 -d 00:22:46.062 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:46.062 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_7:0 00:22:46.062 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias lvs0/lbd_7:0 1:2 256 -d 00:22:46.321 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:46.321 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_8:0 00:22:46.321 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias lvs0/lbd_8:0 1:2 256 -d 00:22:46.579 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:46.579 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_9:0 00:22:46.579 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias lvs0/lbd_9:0 1:2 256 -d 00:22:46.579 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:46.579 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_10:0 00:22:46.579 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias lvs0/lbd_10:0 1:2 256 -d 00:22:46.838 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:46.838 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_11:0 00:22:46.838 09:04:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target11 Target11_alias lvs0/lbd_11:0 1:2 256 -d 00:22:47.097 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:47.097 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_12:0 00:22:47.097 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target12 Target12_alias lvs0/lbd_12:0 1:2 256 -d 00:22:47.356 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:47.356 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_13:0 00:22:47.356 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target13 Target13_alias lvs0/lbd_13:0 1:2 256 -d 00:22:47.356 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:47.356 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_14:0 00:22:47.356 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target14 Target14_alias lvs0/lbd_14:0 1:2 256 -d
00:22:47.615 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:47.615 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_15:0
00:22:47.615 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target15 Target15_alias lvs0/lbd_15:0 1:2 256 -d
00:22:47.873 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:47.873 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_16:0
00:22:47.873 09:04:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target16 Target16_alias lvs0/lbd_16:0 1:2 256 -d
00:22:48.132 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:48.132 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_17:0
00:22:48.132 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target17 Target17_alias lvs0/lbd_17:0 1:2 256 -d
00:22:48.132 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:48.132 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_18:0
00:22:48.132 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target18 Target18_alias lvs0/lbd_18:0 1:2 256 -d
00:22:48.391 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:48.391 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_19:0
00:22:48.391 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target19 Target19_alias lvs0/lbd_19:0 1:2 256 -d
00:22:48.650 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:48.650 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_20:0
00:22:48.650 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target20 Target20_alias lvs0/lbd_20:0 1:2 256 -d
00:22:48.909 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:48.909 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_21:0
00:22:48.909 09:04:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target21 Target21_alias lvs0/lbd_21:0 1:2 256 -d
00:22:49.168 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:49.168 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_22:0
00:22:49.168 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target22 Target22_alias lvs0/lbd_22:0 1:2 256 -d
00:22:49.168 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:49.168 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_23:0
00:22:49.168 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target23 Target23_alias lvs0/lbd_23:0 1:2 256 -d
00:22:49.426 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:49.426 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_24:0
00:22:49.426 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target24 Target24_alias lvs0/lbd_24:0 1:2 256 -d
00:22:49.685 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:49.685 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_25:0
00:22:49.685 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target25 Target25_alias lvs0/lbd_25:0 1:2 256 -d
00:22:49.943 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:49.943 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_26:0
00:22:49.943 09:04:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target26 Target26_alias lvs0/lbd_26:0 1:2 256 -d
00:22:49.943 09:04:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:49.943 09:04:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_27:0
00:22:49.943 09:04:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target27 Target27_alias lvs0/lbd_27:0 1:2 256 -d
00:22:50.201 09:04:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:50.201 09:04:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_28:0
00:22:50.201 09:04:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target28 Target28_alias lvs0/lbd_28:0 1:2 256 -d
00:22:50.459 09:04:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:50.459 09:04:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_29:0
00:22:50.459 09:04:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target29 Target29_alias lvs0/lbd_29:0 1:2 256 -d
00:22:50.719 09:04:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:22:50.719 09:04:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_30:0
00:22:50.719 09:04:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target30 Target30_alias lvs0/lbd_30:0 1:2 256 -d
00:22:50.719 09:04:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@69 -- # sleep 1
00:22:52.097 Logging into iSCSI target.
00:22:52.097 09:04:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@71 -- # echo 'Logging into iSCSI target.'
00:22:52.097 09:04:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@72 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target16
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target17
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target18
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target19
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target20
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target21
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target22
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target23
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target24
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target25
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target26
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target27
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target28
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target29
00:22:52.097 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target30
00:22:52.097 09:04:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@73 -- # iscsiadm -m node --login -p 10.0.0.1:3260
00:22:52.097 [2024-07-25 09:04:58.909008] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:58.920298] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:58.930395] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:58.945033] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:58.969299] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:58.989521] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:58.999643] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:59.016942] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:59.035652] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:59.054812] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:59.079902] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:59.098653] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:59.112629] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260]
00:22:52.097 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260]
00:22:52.097 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful.
00:22:52.097 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful.
00:22:52.097 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful.
00:22:52.097 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful.
00:22:52.097 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful.
00:22:52.097 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful.
00:22:52.097 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful.
00:22:52.097 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful.
00:22:52.097 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful.
00:22:52.097 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful.
00:22:52.097 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful.
00:22:52.097 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful.
00:22:52.097 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful.
00:22:52.097 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, por[2024-07-25 09:04:59.126759] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:59.150024] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:59.170749] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:59.210635] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.097 [2024-07-25 09:04:59.215560] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.356 [2024-07-25 09:04:59.233730] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.356 [2024-07-25 09:04:59.255852] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.356 [2024-07-25 09:04:59.266869] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.356 [2024-07-25 09:04:59.289847] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.356 [2024-07-25 09:04:59.317178] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.356 [2024-07-25 09:04:59.325216] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.356 [2024-07-25 09:04:59.339200] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.356 [2024-07-25 09:04:59.355543] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.356 [2024-07-25 09:04:59.380855] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.356 [2024-07-25 09:04:59.397459] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.356 [2024-07-25 09:04:59.417303] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.356 tal: 10.0.0.1,3260] successful.
00:22:52.356 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful.
00:22:52.356 Login to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful.
00:22:52.356 Login to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful.
00:22:52.356 Login to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful.
00:22:52.356 Login to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful.
00:22:52.356 Login to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful.
00:22:52.357 Login to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful.
00:22:52.357 Login to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful.
00:22:52.357 Login to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful.
00:22:52.357 Login to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful.
00:22:52.357 Login to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful.
00:22:52.357 Login to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful.
00:22:52.357 Login to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful.
00:22:52.357 Login to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful.
00:22:52.357 Login to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful.
00:22:52.357 Login to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful.
00:22:52.357 09:04:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@74 -- # waitforiscsidevices 30
00:22:52.357 09:04:59 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@116 -- # local num=30
00:22:52.357 09:04:59 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i = 1 ))
00:22:52.357 09:04:59 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 ))
00:22:52.357 09:04:59 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3
00:22:52.357 09:04:59 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*'
00:22:52.357 [2024-07-25 09:04:59.425231] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:52.357 09:04:59 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # n=30
00:22:52.357 09:04:59 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@120 -- # '[' 30 -ne 30 ']'
00:22:52.357 09:04:59 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@123 -- # return 0
00:22:52.357 Running FIO
00:22:52.357 09:04:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@76 -- # echo 'Running FIO'
00:22:52.357 09:04:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 64 -t randrw -r 5
00:22:52.614 [global]
00:22:52.614 thread=1
00:22:52.614 invalidate=1
00:22:52.614 rw=randrw
00:22:52.614 time_based=1
00:22:52.614 runtime=5
00:22:52.614 ioengine=libaio
00:22:52.614 direct=1
00:22:52.614 bs=131072
00:22:52.614 iodepth=64
00:22:52.614 norandommap=1
00:22:52.614 numjobs=1
00:22:52.614
00:22:52.614 [job0]
00:22:52.614 filename=/dev/sda
00:22:52.614 [job1]
00:22:52.614 filename=/dev/sdb
00:22:52.614 [job2]
00:22:52.614 filename=/dev/sdc
00:22:52.614 [job3]
00:22:52.614 filename=/dev/sdd
00:22:52.614 [job4]
00:22:52.614 filename=/dev/sde
00:22:52.614 [job5]
00:22:52.614 filename=/dev/sdf
00:22:52.614 [job6]
00:22:52.614 filename=/dev/sdg
00:22:52.614 [job7]
00:22:52.614 filename=/dev/sdh
00:22:52.614 [job8]
00:22:52.614 filename=/dev/sdi
00:22:52.614 [job9]
00:22:52.614 filename=/dev/sdj
00:22:52.614 [job10]
00:22:52.614 filename=/dev/sdk
00:22:52.614 [job11]
00:22:52.614 filename=/dev/sdl
00:22:52.614 [job12]
00:22:52.614 filename=/dev/sdm
00:22:52.614 [job13]
00:22:52.614 filename=/dev/sdn
00:22:52.614 [job14]
00:22:52.614 filename=/dev/sdo
00:22:52.614 [job15]
00:22:52.614 filename=/dev/sdp
00:22:52.614 [job16]
00:22:52.614 filename=/dev/sdq
00:22:52.614 [job17]
00:22:52.614 filename=/dev/sdr
00:22:52.614 [job18]
00:22:52.614 filename=/dev/sds
00:22:52.614 [job19]
00:22:52.614 filename=/dev/sdt
00:22:52.614 [job20]
00:22:52.614 filename=/dev/sdu
00:22:52.614 [job21]
00:22:52.614 filename=/dev/sdv
00:22:52.614 [job22]
00:22:52.614 filename=/dev/sdw
00:22:52.614 [job23]
00:22:52.614 filename=/dev/sdx
00:22:52.614 [job24]
00:22:52.614 filename=/dev/sdy
00:22:52.614 [job25]
00:22:52.614 filename=/dev/sdz
00:22:52.614 [job26]
00:22:52.614 filename=/dev/sdaa
00:22:52.614 [job27]
00:22:52.614 filename=/dev/sdab
00:22:52.614 [job28]
00:22:52.614 filename=/dev/sdac
00:22:52.614 [job29]
00:22:52.614 filename=/dev/sdad
00:22:53.182 queue_depth set to 113 (sda)
00:22:53.182 queue_depth set to 113 (sdb)
00:22:53.182 queue_depth set to 113 (sdc)
00:22:53.182 queue_depth set to 113 (sdd)
00:22:53.182 queue_depth set to 113 (sde)
00:22:53.182 queue_depth set to 113 (sdf)
00:22:53.182 queue_depth set to 113 (sdg)
00:22:53.182 queue_depth set to 113 (sdh)
00:22:53.441 queue_depth set to 113 (sdi)
00:22:53.441 queue_depth set to 113 (sdj)
00:22:53.441 queue_depth set to 113 (sdk)
00:22:53.441 queue_depth set to 113 (sdl)
00:22:53.441 queue_depth set to 113 (sdm)
00:22:53.441 queue_depth set to 113 (sdn)
00:22:53.441 queue_depth set to 113 (sdo)
00:22:53.441 queue_depth set to 113 (sdp)
00:22:53.441 queue_depth set to 113 (sdq)
00:22:53.441 queue_depth set to 113 (sdr)
00:22:53.441 queue_depth set to 113 (sds)
00:22:53.441 queue_depth set to 113 (sdt)
00:22:53.700 queue_depth set to 113 (sdu)
00:22:53.700 queue_depth set to 113 (sdv)
00:22:53.700 queue_depth set to 113 (sdw)
00:22:53.700 queue_depth set to 113 (sdx)
00:22:53.700 queue_depth set to 113 (sdy)
00:22:53.700 queue_depth set to 113 (sdz)
00:22:53.700 queue_depth set to 113 (sdaa)
00:22:53.700 queue_depth set to 113 (sdab)
00:22:53.700 queue_depth set to 113 (sdac)
00:22:53.700 queue_depth set to 113 (sdad)
00:22:53.958 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.958 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.958 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.958 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.958 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.958 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.958 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.958 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.958 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.958 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.958 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.958 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job16: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job17: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job18: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job19: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job20: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job21: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job22: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job23: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job24: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job25: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job26: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job27: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job28: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 job29: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:22:53.959 fio-3.35
00:22:53.959 Starting 30 threads
00:22:53.959 [2024-07-25 09:05:00.945929] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.950123] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.954303] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.958504] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.961874] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.965089] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.968309] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.971032] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.974473] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.977939] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.980467] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.982833] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.985028] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.987247] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.989543] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.991437] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.993484] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.995453] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.997449] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:00.999400] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:01.001251] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:01.003091] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:01.004828] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:01.006687] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:01.008492] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:01.010390] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:01.012075] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:01.013850] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:01.015650] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:22:53.959 [2024-07-25 09:05:01.017380] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.537 [2024-07-25 09:05:06.847791] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.537 [2024-07-25 09:05:06.863922] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.537 [2024-07-25 09:05:06.868090] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.537 [2024-07-25 09:05:06.871835] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.537 [2024-07-25 09:05:06.876077] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.537 [2024-07-25 09:05:06.879403] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.537 [2024-07-25 09:05:06.882458] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.537 [2024-07-25 09:05:06.885694] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.537 [2024-07-25 09:05:06.888354] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.537 [2024-07-25 09:05:06.890786] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.537 [2024-07-25 09:05:06.893420] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.537 [2024-07-25 09:05:06.895920] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.537 [2024-07-25 09:05:06.901524] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.537 [2024-07-25 09:05:06.903681] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.538 [2024-07-25 09:05:06.905891] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:00.538
00:23:00.538 job0: (groupid=0, jobs=1): err= 0: pid=80754: Thu Jul 25 09:05:06 2024
00:23:00.538 read: IOPS=73, BW=9467KiB/s (9694kB/s)(50.4MiB/5449msec)
00:23:00.538 slat (usec): min=8, max=996, avg=69.48, stdev=121.69
00:23:00.538 clat (msec): min=12, max=492, avg=63.97, stdev=55.92
00:23:00.538 lat (msec): min=12, max=492, avg=64.04, stdev=55.91
00:23:00.538 clat percentiles (msec):
00:23:00.538 | 1.00th=[ 17], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 49],
00:23:00.538 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53],
00:23:00.538 | 70.00th=[ 53], 80.00th=[ 55], 90.00th=[ 81], 95.00th=[ 127],
00:23:00.538 | 99.00th=[ 456], 99.50th=[ 468], 99.90th=[ 493], 99.95th=[ 493],
00:23:00.538 | 99.99th=[ 493]
00:23:00.538 bw ( KiB/s): min= 6898, max=15872, per=3.41%, avg=10185.90, stdev=2981.04, samples=10
00:23:00.538 iops : min= 53, max= 124, avg=79.40, stdev=23.50, samples=10
00:23:00.538 write: IOPS=77, BW=9937KiB/s (10.2MB/s)(52.9MiB/5449msec); 0 zone resets
00:23:00.538 slat (usec): min=15, max=2009, avg=93.88, stdev=184.58
00:23:00.538 clat (msec): min=152, max=1206, avg=762.05, stdev=121.17
00:23:00.538 lat (msec): min=152, max=1206, avg=762.14, stdev=121.19
00:23:00.538 clat percentiles (msec):
00:23:00.538 | 1.00th=[ 292], 5.00th=[ 518], 10.00th=[ 693], 20.00th=[ 743],
00:23:00.538 | 30.00th=[ 760], 40.00th=[ 768], 50.00th=[ 776], 60.00th=[ 776],
00:23:00.538 | 70.00th=[ 785], 80.00th=[ 793], 90.00th=[ 818], 95.00th=[ 936],
00:23:00.538 | 99.00th=[ 1133], 99.50th=[ 1167], 99.90th=[ 1200], 99.95th=[ 1200],
00:23:00.538 | 99.99th=[ 1200]
00:23:00.538 bw ( KiB/s): min= 3584, max=10240, per=3.13%, avg=9340.00, stdev=2036.82, samples=10
00:23:00.538 iops : min= 28, max= 80, avg=72.80, stdev=15.86, samples=10
00:23:00.538 lat (msec) : 20=0.61%, 50=16.46%, 100=27.72%, 250=3.63%, 500=2.78%
00:23:00.538 lat (msec) : 750=10.05%, 1000=36.56%, 2000=2.18%
00:23:00.538 cpu : usr=0.17%, sys=0.81%, ctx=572, majf=0, minf=1
00:23:00.538 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4%
00:23:00.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:00.538 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:00.538 issued rwts: total=403,423,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:00.538 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:00.538 job1: (groupid=0, jobs=1): err= 0: pid=80755: Thu Jul 25 09:05:06 2024
00:23:00.538 read: IOPS=68, BW=8713KiB/s (8922kB/s)(46.5MiB/5465msec)
00:23:00.538 slat (usec): min=8, max=3182, avg=44.38, stdev=172.81
00:23:00.538 clat (msec): min=16, max=476, avg=62.51, stdev=38.36
00:23:00.538 lat (msec): min=20, max=476, avg=62.55, stdev=38.35
00:23:00.538 clat percentiles (msec):
00:23:00.538 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50],
00:23:00.538 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53],
00:23:00.538 | 70.00th=[ 54], 80.00th=[ 56], 90.00th=[ 101], 95.00th=[ 146],
00:23:00.538 | 99.00th=[ 226], 99.50th=[ 249], 99.90th=[ 477], 99.95th=[ 477],
00:23:00.538 | 99.99th=[ 477]
00:23:00.538 bw ( KiB/s): min= 6400, max=14592, per=3.17%, avg=9465.50, stdev=2486.66, samples=10
00:23:00.538 iops : min= 50, max= 114, avg=73.70, stdev=19.29, samples=10
00:23:00.538 write: IOPS=78, BW=9.77MiB/s (10.2MB/s)(53.4MiB/5465msec); 0 zone resets
00:23:00.538 slat (usec): min=14, max=650, avg=45.43, stdev=34.41
00:23:00.538 clat (msec): min=207, max=1204, avg=762.72, stdev=121.23
00:23:00.538 lat (msec): min=207, max=1205, avg=762.76, stdev=121.23
00:23:00.538 clat percentiles (msec):
00:23:00.538 | 1.00th=[ 342], 5.00th=[ 514], 10.00th=[ 667], 20.00th=[ 735],
00:23:00.538 | 30.00th=[ 751], 40.00th=[ 760], 50.00th=[ 776], 60.00th=[ 785],
00:23:00.538 | 70.00th=[ 793], 80.00th=[ 802], 90.00th=[ 818], 95.00th=[ 944],
00:23:00.538 | 99.00th=[ 1150], 99.50th=[ 1167], 99.90th=[ 1200], 99.95th=[ 1200],
00:23:00.538 | 99.99th=[ 1200]
00:23:00.538 bw ( KiB/s): min= 3328, max=10240, per=3.12%, avg=9338.00, stdev=2127.08, samples=10
00:23:00.538 iops : min= 26, max= 80, avg=72.70, stdev=16.54, samples=10
00:23:00.538 lat (msec) : 20=0.13%, 50=15.39%, 100=26.16%, 250=5.13%, 500=2.00%
00:23:00.538 lat (msec) : 750=12.89%, 1000=36.17%, 2000=2.13%
00:23:00.538 cpu : usr=0.20%, sys=0.48%, ctx=489, majf=0, minf=1
00:23:00.538 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1%
00:23:00.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:00.538 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:00.538 issued rwts: total=372,427,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:00.538 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:00.538 job2: (groupid=0, jobs=1): err= 0: pid=80756: Thu Jul 25 09:05:06 2024
00:23:00.538 read: IOPS=80, BW=10.1MiB/s (10.6MB/s)(55.1MiB/5455msec)
00:23:00.538 slat (usec): min=7, max=924, avg=54.69, stdev=96.50
00:23:00.538 clat (msec): min=13, max=485, avg=62.01, stdev=51.58
00:23:00.538 lat (msec): min=13, max=485, avg=62.06, stdev=51.58
00:23:00.538 clat percentiles (msec):
00:23:00.538 | 1.00th=[ 18], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 49],
00:23:00.538 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53],
00:23:00.538 | 70.00th=[ 53], 80.00th=[ 55], 90.00th=[ 63], 95.00th=[ 116],
00:23:00.538 | 99.00th=[ 251], 99.50th=[ 460], 99.90th=[ 485], 99.95th=[ 485],
00:23:00.538 | 99.99th=[ 485]
00:23:00.538 bw ( KiB/s): min= 8431, max=14080, per=3.74%, avg=11182.90, stdev=2196.33, samples=10
00:23:00.538 iops : min= 65, max= 110, avg=87.20, stdev=17.23, samples=10
00:23:00.538 write: 
IOPS=77, BW=9926KiB/s (10.2MB/s)(52.9MiB/5455msec); 0 zone resets 00:23:00.538 slat (usec): min=11, max=9529, avg=85.26, stdev=468.82 00:23:00.538 clat (msec): min=217, max=1213, avg=759.32, stdev=120.58 00:23:00.538 lat (msec): min=217, max=1213, avg=759.41, stdev=120.59 00:23:00.538 clat percentiles (msec): 00:23:00.538 | 1.00th=[ 351], 5.00th=[ 550], 10.00th=[ 651], 20.00th=[ 726], 00:23:00.538 | 30.00th=[ 743], 40.00th=[ 760], 50.00th=[ 768], 60.00th=[ 776], 00:23:00.538 | 70.00th=[ 785], 80.00th=[ 793], 90.00th=[ 835], 95.00th=[ 936], 00:23:00.538 | 99.00th=[ 1150], 99.50th=[ 1183], 99.90th=[ 1217], 99.95th=[ 1217], 00:23:00.538 | 99.99th=[ 1217] 00:23:00.538 bw ( KiB/s): min= 3328, max=10496, per=3.12%, avg=9314.40, stdev=2120.66, samples=10 00:23:00.538 iops : min= 26, max= 82, avg=72.60, stdev=16.51, samples=10 00:23:00.538 lat (msec) : 20=0.58%, 50=17.25%, 100=29.98%, 250=2.66%, 500=2.20% 00:23:00.538 lat (msec) : 750=16.44%, 1000=28.70%, 2000=2.20% 00:23:00.538 cpu : usr=0.18%, sys=0.59%, ctx=573, majf=0, minf=1 00:23:00.538 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.7% 00:23:00.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.538 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.538 issued rwts: total=441,423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.538 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.538 job3: (groupid=0, jobs=1): err= 0: pid=80757: Thu Jul 25 09:05:06 2024 00:23:00.538 read: IOPS=76, BW=9781KiB/s (10.0MB/s)(52.0MiB/5444msec) 00:23:00.538 slat (usec): min=7, max=1081, avg=42.87, stdev=79.56 00:23:00.538 clat (msec): min=34, max=460, avg=62.75, stdev=40.32 00:23:00.538 lat (msec): min=34, max=460, avg=62.79, stdev=40.31 00:23:00.538 clat percentiles (msec): 00:23:00.538 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.538 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.538 | 
70.00th=[ 54], 80.00th=[ 56], 90.00th=[ 96], 95.00th=[ 155], 00:23:00.538 | 99.00th=[ 188], 99.50th=[ 207], 99.90th=[ 460], 99.95th=[ 460], 00:23:00.538 | 99.99th=[ 460] 00:23:00.538 bw ( KiB/s): min= 6387, max=16128, per=3.54%, avg=10571.50, stdev=2654.62, samples=10 00:23:00.538 iops : min= 49, max= 126, avg=82.50, stdev=20.90, samples=10 00:23:00.538 write: IOPS=78, BW=9.80MiB/s (10.3MB/s)(53.4MiB/5444msec); 0 zone resets 00:23:00.538 slat (usec): min=16, max=7764, avg=69.52, stdev=374.58 00:23:00.538 clat (msec): min=211, max=1189, avg=752.34, stdev=120.77 00:23:00.538 lat (msec): min=219, max=1189, avg=752.41, stdev=120.70 00:23:00.538 clat percentiles (msec): 00:23:00.538 | 1.00th=[ 334], 5.00th=[ 498], 10.00th=[ 634], 20.00th=[ 735], 00:23:00.538 | 30.00th=[ 751], 40.00th=[ 760], 50.00th=[ 768], 60.00th=[ 776], 00:23:00.538 | 70.00th=[ 785], 80.00th=[ 793], 90.00th=[ 802], 95.00th=[ 927], 00:23:00.538 | 99.00th=[ 1116], 99.50th=[ 1167], 99.90th=[ 1183], 99.95th=[ 1183], 00:23:00.538 | 99.99th=[ 1183] 00:23:00.538 bw ( KiB/s): min= 3840, max=10240, per=3.13%, avg=9367.50, stdev=1956.81, samples=10 00:23:00.538 iops : min= 30, max= 80, avg=73.10, stdev=15.25, samples=10 00:23:00.538 lat (msec) : 50=15.54%, 100=29.18%, 250=4.74%, 500=2.61%, 750=12.81% 00:23:00.538 lat (msec) : 1000=33.33%, 2000=1.78% 00:23:00.538 cpu : usr=0.24%, sys=0.55%, ctx=510, majf=0, minf=1 00:23:00.538 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.5% 00:23:00.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.538 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.538 issued rwts: total=416,427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.538 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.538 job4: (groupid=0, jobs=1): err= 0: pid=80758: Thu Jul 25 09:05:06 2024 00:23:00.538 read: IOPS=76, BW=9743KiB/s (9977kB/s)(51.8MiB/5439msec) 00:23:00.538 slat (usec): min=8, max=406, 
avg=36.62, stdev=29.04 00:23:00.538 clat (msec): min=37, max=475, avg=67.52, stdev=49.84 00:23:00.538 lat (msec): min=37, max=475, avg=67.56, stdev=49.85 00:23:00.538 clat percentiles (msec): 00:23:00.538 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 50], 00:23:00.538 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.538 | 70.00th=[ 54], 80.00th=[ 58], 90.00th=[ 118], 95.00th=[ 155], 00:23:00.538 | 99.00th=[ 222], 99.50th=[ 451], 99.90th=[ 477], 99.95th=[ 477], 00:23:00.538 | 99.99th=[ 477] 00:23:00.538 bw ( KiB/s): min= 7680, max=19968, per=3.51%, avg=10496.00, stdev=3563.62, samples=10 00:23:00.538 iops : min= 60, max= 156, avg=82.00, stdev=27.84, samples=10 00:23:00.538 write: IOPS=77, BW=9978KiB/s (10.2MB/s)(53.0MiB/5439msec); 0 zone resets 00:23:00.538 slat (usec): min=11, max=2046, avg=49.08, stdev=102.53 00:23:00.538 clat (msec): min=208, max=1192, avg=753.74, stdev=119.13 00:23:00.539 lat (msec): min=208, max=1192, avg=753.79, stdev=119.13 00:23:00.539 clat percentiles (msec): 00:23:00.539 | 1.00th=[ 334], 5.00th=[ 514], 10.00th=[ 667], 20.00th=[ 726], 00:23:00.539 | 30.00th=[ 751], 40.00th=[ 760], 50.00th=[ 768], 60.00th=[ 776], 00:23:00.539 | 70.00th=[ 785], 80.00th=[ 793], 90.00th=[ 802], 95.00th=[ 927], 00:23:00.539 | 99.00th=[ 1150], 99.50th=[ 1167], 99.90th=[ 1200], 99.95th=[ 1200], 00:23:00.539 | 99.99th=[ 1200] 00:23:00.539 bw ( KiB/s): min= 3584, max=10240, per=3.13%, avg=9344.00, stdev=2038.20, samples=10 00:23:00.539 iops : min= 28, max= 80, avg=73.00, stdev=15.92, samples=10 00:23:00.539 lat (msec) : 50=14.08%, 100=28.52%, 250=6.68%, 500=2.39%, 750=13.01% 00:23:00.539 lat (msec) : 1000=33.53%, 2000=1.79% 00:23:00.539 cpu : usr=0.15%, sys=0.74%, ctx=507, majf=0, minf=1 00:23:00.539 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:23:00.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.539 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:23:00.539 issued rwts: total=414,424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.539 job5: (groupid=0, jobs=1): err= 0: pid=80759: Thu Jul 25 09:05:06 2024 00:23:00.539 read: IOPS=81, BW=10.2MiB/s (10.7MB/s)(55.5MiB/5443msec) 00:23:00.539 slat (usec): min=8, max=1136, avg=39.53, stdev=68.14 00:23:00.539 clat (msec): min=36, max=470, avg=62.86, stdev=45.62 00:23:00.539 lat (msec): min=36, max=470, avg=62.90, stdev=45.61 00:23:00.539 clat percentiles (msec): 00:23:00.539 | 1.00th=[ 38], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.539 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.539 | 70.00th=[ 53], 80.00th=[ 56], 90.00th=[ 84], 95.00th=[ 148], 00:23:00.539 | 99.00th=[ 207], 99.50th=[ 472], 99.90th=[ 472], 99.95th=[ 472], 00:23:00.539 | 99.99th=[ 472] 00:23:00.539 bw ( KiB/s): min= 7680, max=16128, per=3.77%, avg=11262.10, stdev=2779.81, samples=10 00:23:00.539 iops : min= 60, max= 126, avg=87.90, stdev=21.79, samples=10 00:23:00.539 write: IOPS=78, BW=9.78MiB/s (10.3MB/s)(53.2MiB/5443msec); 0 zone resets 00:23:00.539 slat (usec): min=13, max=7680, avg=68.22, stdev=376.19 00:23:00.539 clat (msec): min=217, max=1199, avg=749.50, stdev=119.02 00:23:00.539 lat (msec): min=222, max=1199, avg=749.57, stdev=118.94 00:23:00.539 clat percentiles (msec): 00:23:00.539 | 1.00th=[ 334], 5.00th=[ 510], 10.00th=[ 642], 20.00th=[ 735], 00:23:00.539 | 30.00th=[ 751], 40.00th=[ 760], 50.00th=[ 760], 60.00th=[ 768], 00:23:00.539 | 70.00th=[ 776], 80.00th=[ 785], 90.00th=[ 802], 95.00th=[ 911], 00:23:00.539 | 99.00th=[ 1133], 99.50th=[ 1150], 99.90th=[ 1200], 99.95th=[ 1200], 00:23:00.539 | 99.99th=[ 1200] 00:23:00.539 bw ( KiB/s): min= 3840, max=10240, per=3.13%, avg=9367.50, stdev=1956.81, samples=10 00:23:00.539 iops : min= 30, max= 80, avg=73.10, stdev=15.25, samples=10 00:23:00.539 lat (msec) : 50=17.93%, 100=28.74%, 250=4.37%, 500=2.30%, 750=13.91% 00:23:00.539 
lat (msec) : 1000=31.15%, 2000=1.61% 00:23:00.539 cpu : usr=0.13%, sys=0.53%, ctx=561, majf=0, minf=1 00:23:00.539 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, >=64=92.8% 00:23:00.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.539 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.539 issued rwts: total=444,426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.539 job6: (groupid=0, jobs=1): err= 0: pid=80769: Thu Jul 25 09:05:06 2024 00:23:00.539 read: IOPS=77, BW=9980KiB/s (10.2MB/s)(52.9MiB/5425msec) 00:23:00.539 slat (usec): min=7, max=579, avg=40.87, stdev=34.28 00:23:00.539 clat (msec): min=35, max=464, avg=64.02, stdev=46.90 00:23:00.539 lat (msec): min=35, max=464, avg=64.06, stdev=46.89 00:23:00.539 clat percentiles (msec): 00:23:00.539 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.539 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.539 | 70.00th=[ 54], 80.00th=[ 57], 90.00th=[ 97], 95.00th=[ 138], 00:23:00.539 | 99.00th=[ 192], 99.50th=[ 439], 99.90th=[ 464], 99.95th=[ 464], 00:23:00.539 | 99.99th=[ 464] 00:23:00.539 bw ( KiB/s): min= 6144, max=15360, per=3.58%, avg=10697.30, stdev=3228.65, samples=10 00:23:00.539 iops : min= 48, max= 120, avg=83.40, stdev=25.35, samples=10 00:23:00.539 write: IOPS=78, BW=9.82MiB/s (10.3MB/s)(53.2MiB/5425msec); 0 zone resets 00:23:00.539 slat (usec): min=9, max=124, avg=47.83, stdev=20.74 00:23:00.539 clat (msec): min=201, max=1180, avg=750.23, stdev=118.13 00:23:00.539 lat (msec): min=201, max=1180, avg=750.27, stdev=118.14 00:23:00.539 clat percentiles (msec): 00:23:00.539 | 1.00th=[ 321], 5.00th=[ 506], 10.00th=[ 651], 20.00th=[ 726], 00:23:00.539 | 30.00th=[ 743], 40.00th=[ 760], 50.00th=[ 760], 60.00th=[ 768], 00:23:00.539 | 70.00th=[ 785], 80.00th=[ 793], 90.00th=[ 802], 95.00th=[ 877], 00:23:00.539 | 99.00th=[ 1116], 
99.50th=[ 1150], 99.90th=[ 1183], 99.95th=[ 1183], 00:23:00.539 | 99.99th=[ 1183] 00:23:00.539 bw ( KiB/s): min= 3840, max=10240, per=3.14%, avg=9391.20, stdev=1959.68, samples=10 00:23:00.539 iops : min= 30, max= 80, avg=73.20, stdev=15.26, samples=10 00:23:00.539 lat (msec) : 50=15.78%, 100=29.09%, 250=4.83%, 500=2.59%, 750=14.61% 00:23:00.539 lat (msec) : 1000=31.33%, 2000=1.77% 00:23:00.539 cpu : usr=0.28%, sys=0.70%, ctx=494, majf=0, minf=1 00:23:00.539 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6% 00:23:00.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.539 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.539 issued rwts: total=423,426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.539 job7: (groupid=0, jobs=1): err= 0: pid=80809: Thu Jul 25 09:05:06 2024 00:23:00.539 read: IOPS=71, BW=9108KiB/s (9326kB/s)(48.5MiB/5453msec) 00:23:00.539 slat (usec): min=9, max=1756, avg=49.05, stdev=94.27 00:23:00.539 clat (msec): min=19, max=480, avg=63.54, stdev=41.84 00:23:00.539 lat (msec): min=19, max=480, avg=63.59, stdev=41.84 00:23:00.539 clat percentiles (msec): 00:23:00.539 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.539 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.539 | 70.00th=[ 54], 80.00th=[ 56], 90.00th=[ 99], 95.00th=[ 146], 00:23:00.539 | 99.00th=[ 207], 99.50th=[ 456], 99.90th=[ 481], 99.95th=[ 481], 00:23:00.539 | 99.99th=[ 481] 00:23:00.539 bw ( KiB/s): min= 6144, max=15584, per=3.30%, avg=9852.80, stdev=2647.93, samples=10 00:23:00.539 iops : min= 48, max= 121, avg=76.90, stdev=20.51, samples=10 00:23:00.539 write: IOPS=78, BW=10000KiB/s (10.2MB/s)(53.2MiB/5453msec); 0 zone resets 00:23:00.539 slat (usec): min=13, max=3016, avg=76.37, stdev=187.48 00:23:00.539 clat (msec): min=212, max=1207, avg=759.62, stdev=117.91 00:23:00.539 lat (msec): 
min=212, max=1208, avg=759.69, stdev=117.90 00:23:00.539 clat percentiles (msec): 00:23:00.539 | 1.00th=[ 338], 5.00th=[ 531], 10.00th=[ 676], 20.00th=[ 743], 00:23:00.539 | 30.00th=[ 751], 40.00th=[ 760], 50.00th=[ 768], 60.00th=[ 776], 00:23:00.539 | 70.00th=[ 785], 80.00th=[ 793], 90.00th=[ 810], 95.00th=[ 961], 00:23:00.539 | 99.00th=[ 1133], 99.50th=[ 1200], 99.90th=[ 1217], 99.95th=[ 1217], 00:23:00.539 | 99.99th=[ 1217] 00:23:00.539 bw ( KiB/s): min= 3321, max=10240, per=3.13%, avg=9343.30, stdev=2124.42, samples=10 00:23:00.539 iops : min= 25, max= 80, avg=72.90, stdev=16.89, samples=10 00:23:00.539 lat (msec) : 20=0.25%, 50=13.51%, 100=29.24%, 250=4.79%, 500=2.09% 00:23:00.539 lat (msec) : 750=11.43%, 1000=36.73%, 2000=1.97% 00:23:00.539 cpu : usr=0.35%, sys=0.53%, ctx=514, majf=0, minf=1 00:23:00.539 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:23:00.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.539 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.539 issued rwts: total=388,426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.539 job8: (groupid=0, jobs=1): err= 0: pid=80810: Thu Jul 25 09:05:06 2024 00:23:00.539 read: IOPS=69, BW=8954KiB/s (9169kB/s)(47.5MiB/5432msec) 00:23:00.539 slat (usec): min=8, max=808, avg=43.21, stdev=78.04 00:23:00.539 clat (msec): min=37, max=464, avg=67.92, stdev=51.60 00:23:00.539 lat (msec): min=37, max=464, avg=67.96, stdev=51.59 00:23:00.539 clat percentiles (msec): 00:23:00.539 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.539 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.539 | 70.00th=[ 54], 80.00th=[ 58], 90.00th=[ 118], 95.00th=[ 161], 00:23:00.539 | 99.00th=[ 439], 99.50th=[ 464], 99.90th=[ 464], 99.95th=[ 464], 00:23:00.539 | 99.99th=[ 464] 00:23:00.539 bw ( KiB/s): min= 5632, max=16896, per=3.22%, 
avg=9623.50, stdev=2987.70, samples=10 00:23:00.539 iops : min= 44, max= 132, avg=75.10, stdev=23.32, samples=10 00:23:00.539 write: IOPS=78, BW=9.78MiB/s (10.3MB/s)(53.1MiB/5432msec); 0 zone resets 00:23:00.539 slat (usec): min=13, max=1020, avg=65.06, stdev=104.15 00:23:00.539 clat (msec): min=206, max=1181, avg=755.94, stdev=122.22 00:23:00.539 lat (msec): min=206, max=1181, avg=756.01, stdev=122.24 00:23:00.539 clat percentiles (msec): 00:23:00.539 | 1.00th=[ 326], 5.00th=[ 514], 10.00th=[ 659], 20.00th=[ 735], 00:23:00.539 | 30.00th=[ 751], 40.00th=[ 760], 50.00th=[ 768], 60.00th=[ 776], 00:23:00.539 | 70.00th=[ 785], 80.00th=[ 802], 90.00th=[ 827], 95.00th=[ 927], 00:23:00.539 | 99.00th=[ 1150], 99.50th=[ 1167], 99.90th=[ 1183], 99.95th=[ 1183], 00:23:00.539 | 99.99th=[ 1183] 00:23:00.539 bw ( KiB/s): min= 3840, max=10240, per=3.13%, avg=9367.50, stdev=1956.81, samples=10 00:23:00.539 iops : min= 30, max= 80, avg=73.10, stdev=15.25, samples=10 00:23:00.539 lat (msec) : 50=13.54%, 100=27.58%, 250=5.96%, 500=2.48%, 750=14.16% 00:23:00.539 lat (msec) : 1000=34.66%, 2000=1.61% 00:23:00.539 cpu : usr=0.15%, sys=0.55%, ctx=568, majf=0, minf=1 00:23:00.539 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:23:00.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.539 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.539 issued rwts: total=380,425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.539 job9: (groupid=0, jobs=1): err= 0: pid=80811: Thu Jul 25 09:05:06 2024 00:23:00.539 read: IOPS=84, BW=10.6MiB/s (11.1MB/s)(57.5MiB/5434msec) 00:23:00.539 slat (usec): min=9, max=595, avg=48.52, stdev=51.51 00:23:00.540 clat (msec): min=36, max=472, avg=67.89, stdev=46.57 00:23:00.540 lat (msec): min=36, max=472, avg=67.93, stdev=46.57 00:23:00.540 clat percentiles (msec): 00:23:00.540 | 1.00th=[ 38], 5.00th=[ 46], 
10.00th=[ 48], 20.00th=[ 50], 00:23:00.540 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.540 | 70.00th=[ 54], 80.00th=[ 66], 90.00th=[ 122], 95.00th=[ 161], 00:23:00.540 | 99.00th=[ 199], 99.50th=[ 447], 99.90th=[ 472], 99.95th=[ 472], 00:23:00.540 | 99.99th=[ 472] 00:23:00.540 bw ( KiB/s): min= 7168, max=25394, per=3.91%, avg=11676.10, stdev=5170.13, samples=10 00:23:00.540 iops : min= 56, max= 198, avg=91.10, stdev=40.27, samples=10 00:23:00.540 write: IOPS=78, BW=9.80MiB/s (10.3MB/s)(53.2MiB/5434msec); 0 zone resets 00:23:00.540 slat (usec): min=13, max=3850, avg=68.40, stdev=192.57 00:23:00.540 clat (msec): min=204, max=1164, avg=741.68, stdev=121.76 00:23:00.540 lat (msec): min=204, max=1164, avg=741.75, stdev=121.75 00:23:00.540 clat percentiles (msec): 00:23:00.540 | 1.00th=[ 326], 5.00th=[ 510], 10.00th=[ 617], 20.00th=[ 701], 00:23:00.540 | 30.00th=[ 743], 40.00th=[ 751], 50.00th=[ 760], 60.00th=[ 768], 00:23:00.540 | 70.00th=[ 776], 80.00th=[ 793], 90.00th=[ 802], 95.00th=[ 894], 00:23:00.540 | 99.00th=[ 1133], 99.50th=[ 1133], 99.90th=[ 1167], 99.95th=[ 1167], 00:23:00.540 | 99.99th=[ 1167] 00:23:00.540 bw ( KiB/s): min= 3591, max=10240, per=3.13%, avg=9368.30, stdev=2045.86, samples=10 00:23:00.540 iops : min= 28, max= 80, avg=73.10, stdev=15.98, samples=10 00:23:00.540 lat (msec) : 50=14.90%, 100=30.02%, 250=7.00%, 500=2.26%, 750=15.46% 00:23:00.540 lat (msec) : 1000=28.78%, 2000=1.58% 00:23:00.540 cpu : usr=0.28%, sys=0.74%, ctx=524, majf=0, minf=1 00:23:00.540 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:23:00.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.540 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.540 issued rwts: total=460,426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.540 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.540 job10: (groupid=0, jobs=1): err= 0: pid=80812: Thu Jul 25 09:05:06 2024 
00:23:00.540 read: IOPS=83, BW=10.4MiB/s (10.9MB/s)(56.4MiB/5419msec) 00:23:00.540 slat (usec): min=10, max=1420, avg=51.90, stdev=97.58 00:23:00.540 clat (msec): min=36, max=442, avg=67.70, stdev=53.49 00:23:00.540 lat (msec): min=36, max=442, avg=67.75, stdev=53.49 00:23:00.540 clat percentiles (msec): 00:23:00.540 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.540 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.540 | 70.00th=[ 54], 80.00th=[ 56], 90.00th=[ 120], 95.00th=[ 184], 00:23:00.540 | 99.00th=[ 418], 99.50th=[ 430], 99.90th=[ 443], 99.95th=[ 443], 00:23:00.540 | 99.99th=[ 443] 00:23:00.540 bw ( KiB/s): min= 7649, max=16063, per=3.81%, avg=11383.20, stdev=3225.83, samples=10 00:23:00.540 iops : min= 59, max= 125, avg=88.20, stdev=25.28, samples=10 00:23:00.540 write: IOPS=78, BW=9.80MiB/s (10.3MB/s)(53.1MiB/5419msec); 0 zone resets 00:23:00.540 slat (usec): min=17, max=1831, avg=71.42, stdev=145.21 00:23:00.540 clat (msec): min=212, max=1171, avg=742.91, stdev=119.41 00:23:00.540 lat (msec): min=212, max=1171, avg=742.98, stdev=119.41 00:23:00.540 clat percentiles (msec): 00:23:00.540 | 1.00th=[ 342], 5.00th=[ 506], 10.00th=[ 617], 20.00th=[ 718], 00:23:00.540 | 30.00th=[ 735], 40.00th=[ 751], 50.00th=[ 751], 60.00th=[ 768], 00:23:00.540 | 70.00th=[ 768], 80.00th=[ 785], 90.00th=[ 802], 95.00th=[ 944], 00:23:00.540 | 99.00th=[ 1133], 99.50th=[ 1150], 99.90th=[ 1167], 99.95th=[ 1167], 00:23:00.540 | 99.99th=[ 1167] 00:23:00.540 bw ( KiB/s): min= 3569, max=10199, per=3.13%, avg=9367.40, stdev=2043.10, samples=10 00:23:00.540 iops : min= 27, max= 79, avg=72.40, stdev=16.00, samples=10 00:23:00.540 lat (msec) : 50=15.53%, 100=29.79%, 250=5.94%, 500=2.40%, 750=18.38% 00:23:00.540 lat (msec) : 1000=26.48%, 2000=1.48% 00:23:00.540 cpu : usr=0.28%, sys=0.72%, ctx=572, majf=0, minf=1 00:23:00.540 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, >=64=92.8% 00:23:00.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.540 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.540 issued rwts: total=451,425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.540 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.540 job11: (groupid=0, jobs=1): err= 0: pid=80818: Thu Jul 25 09:05:06 2024 00:23:00.540 read: IOPS=74, BW=9599KiB/s (9830kB/s)(51.2MiB/5467msec) 00:23:00.540 slat (usec): min=9, max=2516, avg=55.28, stdev=161.69 00:23:00.540 clat (msec): min=2, max=480, avg=69.58, stdev=61.60 00:23:00.540 lat (msec): min=2, max=480, avg=69.64, stdev=61.61 00:23:00.540 clat percentiles (msec): 00:23:00.540 | 1.00th=[ 9], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 50], 00:23:00.540 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.540 | 70.00th=[ 54], 80.00th=[ 56], 90.00th=[ 116], 95.00th=[ 249], 00:23:00.540 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 481], 99.95th=[ 481], 00:23:00.540 | 99.99th=[ 481] 00:23:00.540 bw ( KiB/s): min= 4854, max=18688, per=3.49%, avg=10414.40, stdev=3856.94, samples=10 00:23:00.540 iops : min= 37, max= 146, avg=81.10, stdev=30.34, samples=10 00:23:00.540 write: IOPS=78, BW=9997KiB/s (10.2MB/s)(53.4MiB/5467msec); 0 zone resets 00:23:00.540 slat (usec): min=11, max=651, avg=54.01, stdev=64.39 00:23:00.540 clat (msec): min=65, max=1235, avg=751.31, stdev=133.72 00:23:00.540 lat (msec): min=65, max=1235, avg=751.36, stdev=133.73 00:23:00.540 clat percentiles (msec): 00:23:00.540 | 1.00th=[ 284], 5.00th=[ 527], 10.00th=[ 600], 20.00th=[ 718], 00:23:00.540 | 30.00th=[ 743], 40.00th=[ 751], 50.00th=[ 768], 60.00th=[ 776], 00:23:00.540 | 70.00th=[ 785], 80.00th=[ 793], 90.00th=[ 818], 95.00th=[ 986], 00:23:00.540 | 99.00th=[ 1183], 99.50th=[ 1217], 99.90th=[ 1234], 99.95th=[ 1234], 00:23:00.540 | 99.99th=[ 1234] 00:23:00.540 bw ( KiB/s): min= 3584, max=10496, per=3.13%, avg=9363.50, stdev=2043.13, samples=10 00:23:00.540 iops : min= 28, max= 82, avg=72.90, stdev=15.88, 
samples=10 00:23:00.540 lat (msec) : 4=0.24%, 10=0.72%, 20=0.36%, 50=13.38%, 100=28.79% 00:23:00.540 lat (msec) : 250=3.70%, 500=3.82%, 750=16.37%, 1000=30.35%, 2000=2.27% 00:23:00.540 cpu : usr=0.16%, sys=0.49%, ctx=606, majf=0, minf=1 00:23:00.540 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:23:00.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.540 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.540 issued rwts: total=410,427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.540 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.540 job12: (groupid=0, jobs=1): err= 0: pid=80819: Thu Jul 25 09:05:06 2024 00:23:00.540 read: IOPS=86, BW=10.8MiB/s (11.3MB/s)(58.8MiB/5431msec) 00:23:00.540 slat (usec): min=7, max=195, avg=37.01, stdev=20.81 00:23:00.540 clat (msec): min=35, max=470, avg=66.59, stdev=50.48 00:23:00.540 lat (msec): min=35, max=470, avg=66.63, stdev=50.48 00:23:00.540 clat percentiles (msec): 00:23:00.540 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.540 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.540 | 70.00th=[ 54], 80.00th=[ 57], 90.00th=[ 122], 95.00th=[ 167], 00:23:00.540 | 99.00th=[ 239], 99.50th=[ 447], 99.90th=[ 472], 99.95th=[ 472], 00:23:00.540 | 99.99th=[ 472] 00:23:00.540 bw ( KiB/s): min= 8192, max=19712, per=3.98%, avg=11899.70, stdev=3618.54, samples=10 00:23:00.540 iops : min= 64, max= 154, avg=92.80, stdev=28.35, samples=10 00:23:00.540 write: IOPS=78, BW=9.78MiB/s (10.3MB/s)(53.1MiB/5431msec); 0 zone resets 00:23:00.540 slat (usec): min=12, max=122, avg=43.71, stdev=18.15 00:23:00.540 clat (msec): min=215, max=1174, avg=742.97, stdev=117.36 00:23:00.540 lat (msec): min=215, max=1174, avg=743.02, stdev=117.37 00:23:00.540 clat percentiles (msec): 00:23:00.540 | 1.00th=[ 338], 5.00th=[ 535], 10.00th=[ 642], 20.00th=[ 718], 00:23:00.540 | 30.00th=[ 735], 40.00th=[ 751], 50.00th=[ 
760], 60.00th=[ 768], 00:23:00.540 | 70.00th=[ 776], 80.00th=[ 785], 90.00th=[ 793], 95.00th=[ 936], 00:23:00.540 | 99.00th=[ 1133], 99.50th=[ 1133], 99.90th=[ 1183], 99.95th=[ 1183], 00:23:00.540 | 99.99th=[ 1183] 00:23:00.540 bw ( KiB/s): min= 3584, max=10240, per=3.13%, avg=9365.60, stdev=2040.55, samples=10 00:23:00.540 iops : min= 28, max= 80, avg=73.00, stdev=15.90, samples=10 00:23:00.540 lat (msec) : 50=16.42%, 100=30.06%, 250=5.92%, 500=2.12%, 750=17.99% 00:23:00.540 lat (msec) : 1000=25.92%, 2000=1.56% 00:23:00.540 cpu : usr=0.33%, sys=0.63%, ctx=491, majf=0, minf=1 00:23:00.540 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:23:00.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.540 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.540 issued rwts: total=470,425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.540 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.540 job13: (groupid=0, jobs=1): err= 0: pid=80829: Thu Jul 25 09:05:06 2024 00:23:00.540 read: IOPS=79, BW=9.96MiB/s (10.4MB/s)(54.4MiB/5458msec) 00:23:00.540 slat (usec): min=7, max=389, avg=36.84, stdev=28.36 00:23:00.540 clat (msec): min=11, max=494, avg=62.60, stdev=47.09 00:23:00.540 lat (msec): min=11, max=494, avg=62.63, stdev=47.09 00:23:00.540 clat percentiles (msec): 00:23:00.540 | 1.00th=[ 27], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 49], 00:23:00.540 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.540 | 70.00th=[ 54], 80.00th=[ 56], 90.00th=[ 90], 95.00th=[ 133], 00:23:00.540 | 99.00th=[ 234], 99.50th=[ 468], 99.90th=[ 493], 99.95th=[ 493], 00:23:00.540 | 99.99th=[ 493] 00:23:00.540 bw ( KiB/s): min= 5876, max=18176, per=3.70%, avg=11055.80, stdev=3423.49, samples=10 00:23:00.540 iops : min= 45, max= 142, avg=86.20, stdev=26.90, samples=10 00:23:00.540 write: IOPS=77, BW=9944KiB/s (10.2MB/s)(53.0MiB/5458msec); 0 zone resets 00:23:00.540 slat (usec): 
min=12, max=1011, avg=49.66, stdev=58.52 00:23:00.540 clat (msec): min=175, max=1220, avg=758.34, stdev=120.01 00:23:00.540 lat (msec): min=175, max=1221, avg=758.38, stdev=120.02 00:23:00.540 clat percentiles (msec): 00:23:00.540 | 1.00th=[ 313], 5.00th=[ 531], 10.00th=[ 701], 20.00th=[ 735], 00:23:00.540 | 30.00th=[ 743], 40.00th=[ 751], 50.00th=[ 760], 60.00th=[ 776], 00:23:00.540 | 70.00th=[ 785], 80.00th=[ 785], 90.00th=[ 818], 95.00th=[ 944], 00:23:00.540 | 99.00th=[ 1167], 99.50th=[ 1200], 99.90th=[ 1217], 99.95th=[ 1217], 00:23:00.541 | 99.99th=[ 1217] 00:23:00.541 bw ( KiB/s): min= 3328, max=10240, per=3.12%, avg=9314.40, stdev=2120.93, samples=10 00:23:00.541 iops : min= 26, max= 80, avg=72.60, stdev=16.53, samples=10 00:23:00.541 lat (msec) : 20=0.23%, 50=16.18%, 100=30.03%, 250=4.07%, 500=2.21% 00:23:00.541 lat (msec) : 750=14.20%, 1000=31.32%, 2000=1.75% 00:23:00.541 cpu : usr=0.29%, sys=0.59%, ctx=507, majf=0, minf=1 00:23:00.541 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.7% 00:23:00.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.541 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.541 issued rwts: total=435,424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.541 job14: (groupid=0, jobs=1): err= 0: pid=80831: Thu Jul 25 09:05:06 2024 00:23:00.541 read: IOPS=75, BW=9664KiB/s (9896kB/s)(51.6MiB/5470msec) 00:23:00.541 slat (usec): min=7, max=631, avg=41.70, stdev=58.29 00:23:00.541 clat (usec): min=1045, max=493661, avg=67483.59, stdev=57683.08 00:23:00.541 lat (usec): min=1095, max=493707, avg=67525.29, stdev=57677.86 00:23:00.541 clat percentiles (msec): 00:23:00.541 | 1.00th=[ 5], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.541 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.541 | 70.00th=[ 54], 80.00th=[ 56], 90.00th=[ 113], 95.00th=[ 230], 00:23:00.541 | 
99.00th=[ 268], 99.50th=[ 271], 99.90th=[ 493], 99.95th=[ 493], 00:23:00.541 | 99.99th=[ 493] 00:23:00.541 bw ( KiB/s): min= 8448, max=14336, per=3.52%, avg=10515.20, stdev=1708.39, samples=10 00:23:00.541 iops : min= 66, max= 112, avg=81.90, stdev=13.36, samples=10 00:23:00.541 write: IOPS=79, BW=9.92MiB/s (10.4MB/s)(54.2MiB/5470msec); 0 zone resets 00:23:00.541 slat (usec): min=11, max=1099, avg=54.91, stdev=91.68 00:23:00.541 clat (msec): min=4, max=1246, avg=741.14, stdev=164.07 00:23:00.541 lat (msec): min=4, max=1246, avg=741.19, stdev=164.08 00:23:00.541 clat percentiles (msec): 00:23:00.541 | 1.00th=[ 8], 5.00th=[ 481], 10.00th=[ 617], 20.00th=[ 735], 00:23:00.541 | 30.00th=[ 743], 40.00th=[ 751], 50.00th=[ 760], 60.00th=[ 768], 00:23:00.541 | 70.00th=[ 785], 80.00th=[ 793], 90.00th=[ 818], 95.00th=[ 936], 00:23:00.541 | 99.00th=[ 1167], 99.50th=[ 1217], 99.90th=[ 1250], 99.95th=[ 1250], 00:23:00.541 | 99.99th=[ 1250] 00:23:00.541 bw ( KiB/s): min= 5632, max=10240, per=3.19%, avg=9542.70, stdev=1400.34, samples=10 00:23:00.541 iops : min= 44, max= 80, avg=74.30, stdev=10.86, samples=10 00:23:00.541 lat (msec) : 2=0.12%, 10=1.77%, 20=0.12%, 50=15.94%, 100=26.56% 00:23:00.541 lat (msec) : 250=4.01%, 500=3.07%, 750=14.88%, 1000=31.29%, 2000=2.24% 00:23:00.541 cpu : usr=0.09%, sys=0.60%, ctx=564, majf=0, minf=1 00:23:00.541 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6% 00:23:00.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.541 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.541 issued rwts: total=413,434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.541 job15: (groupid=0, jobs=1): err= 0: pid=80851: Thu Jul 25 09:05:06 2024 00:23:00.541 read: IOPS=73, BW=9369KiB/s (9594kB/s)(49.9MiB/5451msec) 00:23:00.541 slat (usec): min=7, max=365, avg=41.88, stdev=34.93 00:23:00.541 clat (msec): min=21, 
max=489, avg=65.82, stdev=56.32 00:23:00.541 lat (msec): min=21, max=489, avg=65.87, stdev=56.32 00:23:00.541 clat percentiles (msec): 00:23:00.541 | 1.00th=[ 26], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.541 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.541 | 70.00th=[ 54], 80.00th=[ 56], 90.00th=[ 97], 95.00th=[ 138], 00:23:00.541 | 99.00th=[ 464], 99.50th=[ 472], 99.90th=[ 489], 99.95th=[ 489], 00:23:00.541 | 99.99th=[ 489] 00:23:00.541 bw ( KiB/s): min= 6912, max=16128, per=3.36%, avg=10055.50, stdev=3071.31, samples=10 00:23:00.541 iops : min= 54, max= 126, avg=78.40, stdev=23.83, samples=10 00:23:00.541 write: IOPS=77, BW=9909KiB/s (10.1MB/s)(52.8MiB/5451msec); 0 zone resets 00:23:00.541 slat (usec): min=14, max=502, avg=53.29, stdev=50.98 00:23:00.541 clat (msec): min=206, max=1217, avg=763.10, stdev=122.18 00:23:00.541 lat (msec): min=206, max=1217, avg=763.15, stdev=122.18 00:23:00.541 clat percentiles (msec): 00:23:00.541 | 1.00th=[ 347], 5.00th=[ 542], 10.00th=[ 676], 20.00th=[ 743], 00:23:00.541 | 30.00th=[ 751], 40.00th=[ 760], 50.00th=[ 768], 60.00th=[ 776], 00:23:00.541 | 70.00th=[ 785], 80.00th=[ 802], 90.00th=[ 827], 95.00th=[ 944], 00:23:00.541 | 99.00th=[ 1150], 99.50th=[ 1183], 99.90th=[ 1217], 99.95th=[ 1217], 00:23:00.541 | 99.99th=[ 1217] 00:23:00.541 bw ( KiB/s): min= 3328, max=10240, per=3.12%, avg=9314.40, stdev=2117.23, samples=10 00:23:00.541 iops : min= 26, max= 80, avg=72.60, stdev=16.49, samples=10 00:23:00.541 lat (msec) : 50=15.59%, 100=28.26%, 250=4.26%, 500=2.31%, 750=12.30% 00:23:00.541 lat (msec) : 1000=35.08%, 2000=2.19% 00:23:00.541 cpu : usr=0.20%, sys=0.62%, ctx=535, majf=0, minf=1 00:23:00.541 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.3% 00:23:00.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.541 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.541 issued rwts: total=399,422,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:23:00.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.541 job16: (groupid=0, jobs=1): err= 0: pid=80865: Thu Jul 25 09:05:06 2024 00:23:00.541 read: IOPS=83, BW=10.5MiB/s (11.0MB/s)(56.8MiB/5423msec) 00:23:00.541 slat (usec): min=6, max=577, avg=31.78, stdev=39.16 00:23:00.541 clat (msec): min=37, max=451, avg=68.02, stdev=50.80 00:23:00.541 lat (msec): min=37, max=451, avg=68.05, stdev=50.79 00:23:00.541 clat percentiles (msec): 00:23:00.541 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 49], 00:23:00.541 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.541 | 70.00th=[ 54], 80.00th=[ 61], 90.00th=[ 123], 95.00th=[ 159], 00:23:00.541 | 99.00th=[ 426], 99.50th=[ 439], 99.90th=[ 451], 99.95th=[ 451], 00:23:00.541 | 99.99th=[ 451] 00:23:00.541 bw ( KiB/s): min= 8192, max=21290, per=3.84%, avg=11471.20, stdev=4011.16, samples=10 00:23:00.541 iops : min= 64, max= 166, avg=89.50, stdev=31.31, samples=10 00:23:00.541 write: IOPS=78, BW=9.80MiB/s (10.3MB/s)(53.1MiB/5423msec); 0 zone resets 00:23:00.541 slat (usec): min=12, max=1308, avg=45.59, stdev=72.84 00:23:00.541 clat (msec): min=194, max=1181, avg=742.14, stdev=121.69 00:23:00.541 lat (msec): min=194, max=1181, avg=742.19, stdev=121.70 00:23:00.541 clat percentiles (msec): 00:23:00.541 | 1.00th=[ 321], 5.00th=[ 510], 10.00th=[ 617], 20.00th=[ 718], 00:23:00.541 | 30.00th=[ 743], 40.00th=[ 751], 50.00th=[ 760], 60.00th=[ 768], 00:23:00.541 | 70.00th=[ 776], 80.00th=[ 776], 90.00th=[ 793], 95.00th=[ 894], 00:23:00.541 | 99.00th=[ 1133], 99.50th=[ 1150], 99.90th=[ 1183], 99.95th=[ 1183], 00:23:00.541 | 99.99th=[ 1183] 00:23:00.541 bw ( KiB/s): min= 3847, max=10240, per=3.14%, avg=9393.90, stdev=1958.14, samples=10 00:23:00.541 iops : min= 30, max= 80, avg=73.30, stdev=15.29, samples=10 00:23:00.541 lat (msec) : 50=17.63%, 100=27.08%, 250=6.71%, 500=2.50%, 750=15.93% 00:23:00.541 lat (msec) : 1000=28.44%, 2000=1.71% 00:23:00.541 cpu : 
usr=0.20%, sys=0.48%, ctx=536, majf=0, minf=1 00:23:00.541 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.8% 00:23:00.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.541 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.541 issued rwts: total=454,425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.541 job17: (groupid=0, jobs=1): err= 0: pid=80918: Thu Jul 25 09:05:06 2024 00:23:00.541 read: IOPS=73, BW=9414KiB/s (9640kB/s)(49.9MiB/5425msec) 00:23:00.541 slat (usec): min=8, max=840, avg=35.85, stdev=48.23 00:23:00.541 clat (msec): min=30, max=426, avg=66.22, stdev=42.40 00:23:00.541 lat (msec): min=30, max=426, avg=66.26, stdev=42.40 00:23:00.541 clat percentiles (msec): 00:23:00.541 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 50], 00:23:00.541 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.541 | 70.00th=[ 54], 80.00th=[ 58], 90.00th=[ 127], 95.00th=[ 165], 00:23:00.541 | 99.00th=[ 192], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:23:00.541 | 99.99th=[ 426] 00:23:00.541 bw ( KiB/s): min= 7168, max=20224, per=3.40%, avg=10159.40, stdev=3837.91, samples=10 00:23:00.541 iops : min= 56, max= 158, avg=79.20, stdev=30.03, samples=10 00:23:00.541 write: IOPS=79, BW=9.88MiB/s (10.4MB/s)(53.6MiB/5425msec); 0 zone resets 00:23:00.541 slat (usec): min=12, max=465, avg=47.06, stdev=38.18 00:23:00.541 clat (msec): min=199, max=1142, avg=746.57, stdev=118.92 00:23:00.541 lat (msec): min=199, max=1142, avg=746.61, stdev=118.92 00:23:00.541 clat percentiles (msec): 00:23:00.541 | 1.00th=[ 321], 5.00th=[ 502], 10.00th=[ 617], 20.00th=[ 718], 00:23:00.541 | 30.00th=[ 743], 40.00th=[ 751], 50.00th=[ 768], 60.00th=[ 776], 00:23:00.541 | 70.00th=[ 785], 80.00th=[ 793], 90.00th=[ 818], 95.00th=[ 902], 00:23:00.541 | 99.00th=[ 1099], 99.50th=[ 1116], 99.90th=[ 1150], 99.95th=[ 1150], 
00:23:00.541 | 99.99th=[ 1150] 00:23:00.541 bw ( KiB/s): min= 3840, max=10240, per=3.14%, avg=9391.20, stdev=1967.10, samples=10 00:23:00.541 iops : min= 30, max= 80, avg=73.20, stdev=15.32, samples=10 00:23:00.541 lat (msec) : 50=13.77%, 100=28.62%, 250=5.92%, 500=2.29%, 750=16.06% 00:23:00.541 lat (msec) : 1000=31.76%, 2000=1.57% 00:23:00.541 cpu : usr=0.15%, sys=0.55%, ctx=537, majf=0, minf=1 00:23:00.541 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:23:00.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.541 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.541 issued rwts: total=399,429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.541 job18: (groupid=0, jobs=1): err= 0: pid=80939: Thu Jul 25 09:05:06 2024 00:23:00.541 read: IOPS=78, BW=9.87MiB/s (10.3MB/s)(53.5MiB/5421msec) 00:23:00.541 slat (usec): min=8, max=409, avg=39.76, stdev=26.89 00:23:00.541 clat (msec): min=36, max=469, avg=65.22, stdev=45.92 00:23:00.541 lat (msec): min=36, max=469, avg=65.26, stdev=45.92 00:23:00.541 clat percentiles (msec): 00:23:00.541 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 50], 00:23:00.541 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.542 | 70.00th=[ 54], 80.00th=[ 56], 90.00th=[ 122], 95.00th=[ 153], 00:23:00.542 | 99.00th=[ 197], 99.50th=[ 456], 99.90th=[ 468], 99.95th=[ 468], 00:23:00.542 | 99.99th=[ 468] 00:23:00.542 bw ( KiB/s): min= 8686, max=17338, per=3.63%, avg=10846.00, stdev=2479.82, samples=10 00:23:00.542 iops : min= 67, max= 135, avg=84.00, stdev=19.49, samples=10 00:23:00.542 write: IOPS=78, BW=9.85MiB/s (10.3MB/s)(53.4MiB/5421msec); 0 zone resets 00:23:00.542 slat (usec): min=11, max=429, avg=46.31, stdev=32.49 00:23:00.542 clat (msec): min=202, max=1148, avg=745.92, stdev=119.76 00:23:00.542 lat (msec): min=202, max=1148, avg=745.97, stdev=119.77 
00:23:00.542 clat percentiles (msec): 00:23:00.542 | 1.00th=[ 326], 5.00th=[ 498], 10.00th=[ 634], 20.00th=[ 726], 00:23:00.542 | 30.00th=[ 743], 40.00th=[ 751], 50.00th=[ 760], 60.00th=[ 768], 00:23:00.542 | 70.00th=[ 776], 80.00th=[ 785], 90.00th=[ 810], 95.00th=[ 919], 00:23:00.542 | 99.00th=[ 1116], 99.50th=[ 1116], 99.90th=[ 1150], 99.95th=[ 1150], 00:23:00.542 | 99.99th=[ 1150] 00:23:00.542 bw ( KiB/s): min= 4079, max=10219, per=3.13%, avg=9367.30, stdev=1876.93, samples=10 00:23:00.542 iops : min= 31, max= 79, avg=72.40, stdev=14.70, samples=10 00:23:00.542 lat (msec) : 50=15.20%, 100=29.01%, 250=5.85%, 500=2.57%, 750=15.91% 00:23:00.542 lat (msec) : 1000=29.82%, 2000=1.64% 00:23:00.542 cpu : usr=0.30%, sys=0.59%, ctx=488, majf=0, minf=1 00:23:00.542 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.6% 00:23:00.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.542 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.542 issued rwts: total=428,427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.542 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.542 job19: (groupid=0, jobs=1): err= 0: pid=80973: Thu Jul 25 09:05:06 2024 00:23:00.542 read: IOPS=86, BW=10.8MiB/s (11.3MB/s)(58.9MiB/5456msec) 00:23:00.542 slat (usec): min=8, max=232, avg=38.72, stdev=24.75 00:23:00.542 clat (msec): min=18, max=484, avg=64.79, stdev=47.83 00:23:00.542 lat (msec): min=18, max=484, avg=64.83, stdev=47.83 00:23:00.542 clat percentiles (msec): 00:23:00.542 | 1.00th=[ 32], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.542 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.542 | 70.00th=[ 54], 80.00th=[ 56], 90.00th=[ 103], 95.00th=[ 153], 00:23:00.542 | 99.00th=[ 239], 99.50th=[ 460], 99.90th=[ 485], 99.95th=[ 485], 00:23:00.542 | 99.99th=[ 485] 00:23:00.542 bw ( KiB/s): min= 8175, max=19712, per=4.00%, avg=11949.30, stdev=3293.89, samples=10 00:23:00.542 iops : 
min= 63, max= 154, avg=93.10, stdev=25.95, samples=10 00:23:00.542 write: IOPS=77, BW=9947KiB/s (10.2MB/s)(53.0MiB/5456msec); 0 zone resets 00:23:00.542 slat (usec): min=12, max=596, avg=53.30, stdev=44.48 00:23:00.542 clat (msec): min=223, max=1176, avg=750.19, stdev=115.50 00:23:00.542 lat (msec): min=223, max=1176, avg=750.24, stdev=115.50 00:23:00.542 clat percentiles (msec): 00:23:00.542 | 1.00th=[ 351], 5.00th=[ 535], 10.00th=[ 667], 20.00th=[ 726], 00:23:00.542 | 30.00th=[ 743], 40.00th=[ 751], 50.00th=[ 760], 60.00th=[ 768], 00:23:00.542 | 70.00th=[ 776], 80.00th=[ 785], 90.00th=[ 802], 95.00th=[ 936], 00:23:00.542 | 99.00th=[ 1116], 99.50th=[ 1133], 99.90th=[ 1183], 99.95th=[ 1183], 00:23:00.542 | 99.99th=[ 1183] 00:23:00.542 bw ( KiB/s): min= 3072, max=10496, per=3.12%, avg=9312.30, stdev=2203.86, samples=10 00:23:00.542 iops : min= 24, max= 82, avg=72.50, stdev=17.13, samples=10 00:23:00.542 lat (msec) : 20=0.22%, 50=16.20%, 100=30.50%, 250=5.47%, 500=1.90% 00:23:00.542 lat (msec) : 750=18.44%, 1000=25.36%, 2000=1.90% 00:23:00.542 cpu : usr=0.31%, sys=0.64%, ctx=607, majf=0, minf=1 00:23:00.542 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:23:00.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.542 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.542 issued rwts: total=471,424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.542 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.542 job20: (groupid=0, jobs=1): err= 0: pid=80978: Thu Jul 25 09:05:06 2024 00:23:00.542 read: IOPS=74, BW=9541KiB/s (9770kB/s)(50.8MiB/5447msec) 00:23:00.542 slat (usec): min=8, max=177, avg=39.21, stdev=22.72 00:23:00.542 clat (msec): min=38, max=467, avg=61.89, stdev=39.37 00:23:00.542 lat (msec): min=38, max=467, avg=61.93, stdev=39.37 00:23:00.542 clat percentiles (msec): 00:23:00.542 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 50], 00:23:00.542 | 30.00th=[ 
50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.542 | 70.00th=[ 53], 80.00th=[ 56], 90.00th=[ 89], 95.00th=[ 138], 00:23:00.542 | 99.00th=[ 194], 99.50th=[ 211], 99.90th=[ 468], 99.95th=[ 468], 00:23:00.542 | 99.99th=[ 468] 00:23:00.542 bw ( KiB/s): min= 6400, max=14108, per=3.46%, avg=10345.20, stdev=2635.13, samples=10 00:23:00.542 iops : min= 50, max= 110, avg=80.80, stdev=20.55, samples=10 00:23:00.542 write: IOPS=78, BW=9.78MiB/s (10.2MB/s)(53.2MiB/5447msec); 0 zone resets 00:23:00.542 slat (usec): min=13, max=153, avg=47.96, stdev=23.08 00:23:00.542 clat (msec): min=219, max=1195, avg=757.96, stdev=117.89 00:23:00.542 lat (msec): min=219, max=1195, avg=758.00, stdev=117.89 00:23:00.542 clat percentiles (msec): 00:23:00.542 | 1.00th=[ 342], 5.00th=[ 531], 10.00th=[ 651], 20.00th=[ 735], 00:23:00.542 | 30.00th=[ 751], 40.00th=[ 760], 50.00th=[ 760], 60.00th=[ 768], 00:23:00.542 | 70.00th=[ 776], 80.00th=[ 793], 90.00th=[ 827], 95.00th=[ 911], 00:23:00.542 | 99.00th=[ 1150], 99.50th=[ 1183], 99.90th=[ 1200], 99.95th=[ 1200], 00:23:00.542 | 99.99th=[ 1200] 00:23:00.542 bw ( KiB/s): min= 3591, max=10240, per=3.13%, avg=9344.70, stdev=2036.00, samples=10 00:23:00.542 iops : min= 28, max= 80, avg=73.00, stdev=15.92, samples=10 00:23:00.542 lat (msec) : 50=16.23%, 100=28.49%, 250=4.21%, 500=1.80%, 750=13.46% 00:23:00.542 lat (msec) : 1000=33.77%, 2000=2.04% 00:23:00.542 cpu : usr=0.18%, sys=0.68%, ctx=482, majf=0, minf=1 00:23:00.542 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.4% 00:23:00.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.542 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.542 issued rwts: total=406,426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.542 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.542 job21: (groupid=0, jobs=1): err= 0: pid=80979: Thu Jul 25 09:05:06 2024 00:23:00.542 read: IOPS=71, BW=9126KiB/s 
(9345kB/s)(48.4MiB/5428msec) 00:23:00.542 slat (usec): min=7, max=482, avg=34.73, stdev=35.25 00:23:00.542 clat (msec): min=39, max=455, avg=66.25, stdev=50.22 00:23:00.542 lat (msec): min=39, max=455, avg=66.28, stdev=50.23 00:23:00.542 clat percentiles (msec): 00:23:00.542 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 50], 00:23:00.542 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.542 | 70.00th=[ 53], 80.00th=[ 55], 90.00th=[ 113], 95.00th=[ 169], 00:23:00.542 | 99.00th=[ 435], 99.50th=[ 447], 99.90th=[ 456], 99.95th=[ 456], 00:23:00.542 | 99.99th=[ 456] 00:23:00.542 bw ( KiB/s): min= 7680, max=13056, per=3.28%, avg=9802.80, stdev=1904.28, samples=10 00:23:00.542 iops : min= 60, max= 102, avg=76.50, stdev=14.87, samples=10 00:23:00.542 write: IOPS=78, BW=9.81MiB/s (10.3MB/s)(53.2MiB/5428msec); 0 zone resets 00:23:00.542 slat (usec): min=11, max=557, avg=44.26, stdev=44.08 00:23:00.542 clat (msec): min=210, max=1163, avg=754.08, stdev=117.71 00:23:00.542 lat (msec): min=210, max=1163, avg=754.12, stdev=117.71 00:23:00.542 clat percentiles (msec): 00:23:00.542 | 1.00th=[ 334], 5.00th=[ 523], 10.00th=[ 642], 20.00th=[ 735], 00:23:00.542 | 30.00th=[ 743], 40.00th=[ 760], 50.00th=[ 768], 60.00th=[ 776], 00:23:00.542 | 70.00th=[ 785], 80.00th=[ 802], 90.00th=[ 818], 95.00th=[ 911], 00:23:00.542 | 99.00th=[ 1116], 99.50th=[ 1133], 99.90th=[ 1167], 99.95th=[ 1167], 00:23:00.542 | 99.99th=[ 1167] 00:23:00.542 bw ( KiB/s): min= 3584, max=10240, per=3.14%, avg=9393.20, stdev=2051.10, samples=10 00:23:00.542 iops : min= 28, max= 80, avg=73.30, stdev=16.00, samples=10 00:23:00.542 lat (msec) : 50=13.90%, 100=27.92%, 250=5.66%, 500=2.46%, 750=15.87% 00:23:00.542 lat (msec) : 1000=32.60%, 2000=1.60% 00:23:00.542 cpu : usr=0.18%, sys=0.48%, ctx=549, majf=0, minf=1 00:23:00.542 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:23:00.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.543 
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.543 issued rwts: total=387,426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.543 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.543 job22: (groupid=0, jobs=1): err= 0: pid=80980: Thu Jul 25 09:05:06 2024 00:23:00.543 read: IOPS=87, BW=10.9MiB/s (11.5MB/s)(59.6MiB/5446msec) 00:23:00.543 slat (usec): min=9, max=417, avg=39.00, stdev=38.67 00:23:00.543 clat (msec): min=19, max=494, avg=62.30, stdev=51.14 00:23:00.543 lat (msec): min=19, max=494, avg=62.34, stdev=51.15 00:23:00.543 clat percentiles (msec): 00:23:00.543 | 1.00th=[ 25], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 49], 00:23:00.543 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.543 | 70.00th=[ 53], 80.00th=[ 55], 90.00th=[ 72], 95.00th=[ 126], 00:23:00.543 | 99.00th=[ 456], 99.50th=[ 481], 99.90th=[ 493], 99.95th=[ 493], 00:23:00.543 | 99.99th=[ 493] 00:23:00.543 bw ( KiB/s): min= 8960, max=15390, per=4.04%, avg=12081.80, stdev=1935.06, samples=10 00:23:00.543 iops : min= 70, max= 120, avg=94.20, stdev=15.19, samples=10 00:23:00.543 write: IOPS=77, BW=9918KiB/s (10.2MB/s)(52.8MiB/5446msec); 0 zone resets 00:23:00.543 slat (usec): min=13, max=403, avg=44.68, stdev=35.93 00:23:00.543 clat (msec): min=174, max=1195, avg=754.12, stdev=118.30 00:23:00.543 lat (msec): min=174, max=1195, avg=754.17, stdev=118.30 00:23:00.543 clat percentiles (msec): 00:23:00.543 | 1.00th=[ 330], 5.00th=[ 531], 10.00th=[ 667], 20.00th=[ 735], 00:23:00.543 | 30.00th=[ 743], 40.00th=[ 751], 50.00th=[ 760], 60.00th=[ 768], 00:23:00.543 | 70.00th=[ 776], 80.00th=[ 785], 90.00th=[ 827], 95.00th=[ 936], 00:23:00.543 | 99.00th=[ 1116], 99.50th=[ 1150], 99.90th=[ 1200], 99.95th=[ 1200], 00:23:00.543 | 99.99th=[ 1200] 00:23:00.543 bw ( KiB/s): min= 3334, max=10240, per=3.12%, avg=9315.00, stdev=2119.05, samples=10 00:23:00.543 iops : min= 26, max= 80, avg=72.60, stdev=16.53, samples=10 00:23:00.543 lat (msec) : 20=0.22%, 
50=19.35%, 100=29.48%, 250=3.67%, 500=2.22% 00:23:00.543 lat (msec) : 750=16.80%, 1000=26.59%, 2000=1.67% 00:23:00.543 cpu : usr=0.28%, sys=0.50%, ctx=557, majf=0, minf=1 00:23:00.543 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:23:00.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.543 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.543 issued rwts: total=477,422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.543 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.543 job23: (groupid=0, jobs=1): err= 0: pid=80981: Thu Jul 25 09:05:06 2024 00:23:00.543 read: IOPS=74, BW=9497KiB/s (9725kB/s)(50.2MiB/5418msec) 00:23:00.543 slat (usec): min=5, max=121, avg=36.72, stdev=19.35 00:23:00.543 clat (msec): min=39, max=450, avg=70.48, stdev=57.41 00:23:00.543 lat (msec): min=39, max=450, avg=70.52, stdev=57.40 00:23:00.543 clat percentiles (msec): 00:23:00.543 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 50], 00:23:00.543 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.543 | 70.00th=[ 54], 80.00th=[ 58], 90.00th=[ 132], 95.00th=[ 169], 00:23:00.543 | 99.00th=[ 426], 99.50th=[ 439], 99.90th=[ 451], 99.95th=[ 451], 00:23:00.543 | 99.99th=[ 451] 00:23:00.543 bw ( KiB/s): min= 6400, max=18725, per=3.39%, avg=10139.60, stdev=3853.34, samples=10 00:23:00.543 iops : min= 50, max= 146, avg=79.10, stdev=30.08, samples=10 00:23:00.543 write: IOPS=78, BW=9993KiB/s (10.2MB/s)(52.9MiB/5418msec); 0 zone resets 00:23:00.543 slat (usec): min=11, max=118, avg=47.54, stdev=19.71 00:23:00.543 clat (msec): min=209, max=1179, avg=751.48, stdev=119.55 00:23:00.543 lat (msec): min=209, max=1179, avg=751.52, stdev=119.56 00:23:00.543 clat percentiles (msec): 00:23:00.543 | 1.00th=[ 330], 5.00th=[ 542], 10.00th=[ 609], 20.00th=[ 735], 00:23:00.543 | 30.00th=[ 743], 40.00th=[ 751], 50.00th=[ 768], 60.00th=[ 776], 00:23:00.543 | 70.00th=[ 785], 80.00th=[ 
793], 90.00th=[ 818], 95.00th=[ 927], 00:23:00.543 | 99.00th=[ 1099], 99.50th=[ 1133], 99.90th=[ 1183], 99.95th=[ 1183], 00:23:00.543 | 99.99th=[ 1183] 00:23:00.543 bw ( KiB/s): min= 3847, max=10240, per=3.13%, avg=9368.30, stdev=1954.95, samples=10 00:23:00.543 iops : min= 30, max= 80, avg=73.10, stdev=15.26, samples=10 00:23:00.543 lat (msec) : 50=13.82%, 100=28.36%, 250=6.18%, 500=2.42%, 750=15.64% 00:23:00.543 lat (msec) : 1000=31.76%, 2000=1.82% 00:23:00.543 cpu : usr=0.33%, sys=0.55%, ctx=497, majf=0, minf=1 00:23:00.543 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:23:00.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.543 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.543 issued rwts: total=402,423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.543 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.543 job24: (groupid=0, jobs=1): err= 0: pid=80982: Thu Jul 25 09:05:06 2024 00:23:00.543 read: IOPS=83, BW=10.5MiB/s (11.0MB/s)(56.9MiB/5437msec) 00:23:00.543 slat (usec): min=8, max=107, avg=35.37, stdev=18.28 00:23:00.543 clat (msec): min=36, max=209, avg=60.35, stdev=30.39 00:23:00.543 lat (msec): min=36, max=209, avg=60.38, stdev=30.39 00:23:00.543 clat percentiles (msec): 00:23:00.543 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.543 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.543 | 70.00th=[ 53], 80.00th=[ 55], 90.00th=[ 84], 95.00th=[ 138], 00:23:00.543 | 99.00th=[ 197], 99.50th=[ 201], 99.90th=[ 209], 99.95th=[ 209], 00:23:00.543 | 99.99th=[ 209] 00:23:00.543 bw ( KiB/s): min= 8960, max=15903, per=3.90%, avg=11649.10, stdev=1876.73, samples=10 00:23:00.543 iops : min= 70, max= 124, avg=90.90, stdev=14.70, samples=10 00:23:00.543 write: IOPS=79, BW=9.89MiB/s (10.4MB/s)(53.8MiB/5437msec); 0 zone resets 00:23:00.543 slat (usec): min=13, max=102, avg=42.84, stdev=18.68 00:23:00.543 clat (msec): 
min=211, max=1158, avg=744.17, stdev=118.07 00:23:00.543 lat (msec): min=211, max=1158, avg=744.22, stdev=118.08 00:23:00.543 clat percentiles (msec): 00:23:00.543 | 1.00th=[ 334], 5.00th=[ 493], 10.00th=[ 600], 20.00th=[ 726], 00:23:00.543 | 30.00th=[ 743], 40.00th=[ 751], 50.00th=[ 760], 60.00th=[ 768], 00:23:00.543 | 70.00th=[ 776], 80.00th=[ 785], 90.00th=[ 802], 95.00th=[ 919], 00:23:00.543 | 99.00th=[ 1083], 99.50th=[ 1116], 99.90th=[ 1167], 99.95th=[ 1167], 00:23:00.543 | 99.99th=[ 1167] 00:23:00.543 bw ( KiB/s): min= 3591, max=10240, per=3.13%, avg=9368.20, stdev=2038.40, samples=10 00:23:00.543 iops : min= 28, max= 80, avg=73.10, stdev=15.91, samples=10 00:23:00.543 lat (msec) : 50=18.64%, 100=28.59%, 250=4.52%, 500=2.26%, 750=16.95% 00:23:00.543 lat (msec) : 1000=27.57%, 2000=1.47% 00:23:00.543 cpu : usr=0.17%, sys=0.74%, ctx=495, majf=0, minf=1 00:23:00.543 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:23:00.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.543 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.543 issued rwts: total=455,430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.543 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.543 job25: (groupid=0, jobs=1): err= 0: pid=80983: Thu Jul 25 09:05:06 2024 00:23:00.543 read: IOPS=80, BW=10.0MiB/s (10.5MB/s)(54.4MiB/5424msec) 00:23:00.543 slat (nsec): min=8192, max=89339, avg=27131.08, stdev=12127.01 00:23:00.543 clat (msec): min=34, max=204, avg=62.04, stdev=32.62 00:23:00.543 lat (msec): min=34, max=204, avg=62.07, stdev=32.61 00:23:00.543 clat percentiles (msec): 00:23:00.543 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.543 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.543 | 70.00th=[ 54], 80.00th=[ 57], 90.00th=[ 93], 95.00th=[ 157], 00:23:00.543 | 99.00th=[ 192], 99.50th=[ 203], 99.90th=[ 205], 99.95th=[ 205], 00:23:00.543 | 99.99th=[ 
205] 00:23:00.543 bw ( KiB/s): min= 8686, max=15903, per=3.73%, avg=11135.00, stdev=2131.75, samples=10 00:23:00.543 iops : min= 67, max= 124, avg=86.80, stdev=16.69, samples=10 00:23:00.543 write: IOPS=79, BW=9.89MiB/s (10.4MB/s)(53.6MiB/5424msec); 0 zone resets 00:23:00.543 slat (usec): min=11, max=108, avg=34.88, stdev=14.69 00:23:00.543 clat (msec): min=203, max=1199, avg=745.07, stdev=120.50 00:23:00.543 lat (msec): min=203, max=1199, avg=745.10, stdev=120.50 00:23:00.543 clat percentiles (msec): 00:23:00.543 | 1.00th=[ 330], 5.00th=[ 498], 10.00th=[ 617], 20.00th=[ 726], 00:23:00.543 | 30.00th=[ 735], 40.00th=[ 751], 50.00th=[ 760], 60.00th=[ 768], 00:23:00.543 | 70.00th=[ 776], 80.00th=[ 785], 90.00th=[ 802], 95.00th=[ 936], 00:23:00.543 | 99.00th=[ 1133], 99.50th=[ 1167], 99.90th=[ 1200], 99.95th=[ 1200], 00:23:00.543 | 99.99th=[ 1200] 00:23:00.543 bw ( KiB/s): min= 3591, max=10240, per=3.13%, avg=9366.20, stdev=2037.74, samples=10 00:23:00.543 iops : min= 28, max= 80, avg=73.00, stdev=15.87, samples=10 00:23:00.543 lat (msec) : 50=15.28%, 100=30.32%, 250=5.09%, 500=2.20%, 750=17.13% 00:23:00.543 lat (msec) : 1000=28.59%, 2000=1.39% 00:23:00.543 cpu : usr=0.17%, sys=0.50%, ctx=512, majf=0, minf=1 00:23:00.543 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.7% 00:23:00.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.543 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.543 issued rwts: total=435,429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.543 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.543 job26: (groupid=0, jobs=1): err= 0: pid=80984: Thu Jul 25 09:05:06 2024 00:23:00.543 read: IOPS=72, BW=9249KiB/s (9471kB/s)(49.0MiB/5425msec) 00:23:00.543 slat (usec): min=7, max=147, avg=40.89, stdev=24.66 00:23:00.543 clat (msec): min=35, max=438, avg=68.18, stdev=51.73 00:23:00.543 lat (msec): min=35, max=438, avg=68.22, stdev=51.73 00:23:00.543 clat 
percentiles (msec): 00:23:00.543 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.543 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.543 | 70.00th=[ 54], 80.00th=[ 57], 90.00th=[ 115], 95.00th=[ 180], 00:23:00.543 | 99.00th=[ 426], 99.50th=[ 426], 99.90th=[ 439], 99.95th=[ 439], 00:23:00.543 | 99.99th=[ 439] 00:23:00.543 bw ( KiB/s): min= 7936, max=17664, per=3.31%, avg=9903.50, stdev=2847.08, samples=10 00:23:00.543 iops : min= 62, max= 138, avg=77.20, stdev=22.31, samples=10 00:23:00.543 write: IOPS=78, BW=9.79MiB/s (10.3MB/s)(53.1MiB/5425msec); 0 zone resets 00:23:00.543 slat (usec): min=11, max=638, avg=56.05, stdev=48.88 00:23:00.543 clat (msec): min=210, max=1154, avg=752.82, stdev=119.40 00:23:00.544 lat (msec): min=210, max=1154, avg=752.88, stdev=119.41 00:23:00.544 clat percentiles (msec): 00:23:00.544 | 1.00th=[ 334], 5.00th=[ 502], 10.00th=[ 642], 20.00th=[ 726], 00:23:00.544 | 30.00th=[ 743], 40.00th=[ 760], 50.00th=[ 768], 60.00th=[ 776], 00:23:00.544 | 70.00th=[ 785], 80.00th=[ 793], 90.00th=[ 818], 95.00th=[ 919], 00:23:00.544 | 99.00th=[ 1116], 99.50th=[ 1150], 99.90th=[ 1150], 99.95th=[ 1150], 00:23:00.544 | 99.99th=[ 1150] 00:23:00.544 bw ( KiB/s): min= 3328, max=10240, per=3.13%, avg=9365.50, stdev=2134.15, samples=10 00:23:00.544 iops : min= 26, max= 80, avg=73.00, stdev=16.61, samples=10 00:23:00.544 lat (msec) : 50=16.52%, 100=25.58%, 250=5.75%, 500=2.57%, 750=14.93% 00:23:00.544 lat (msec) : 1000=32.80%, 2000=1.84% 00:23:00.544 cpu : usr=0.20%, sys=0.70%, ctx=526, majf=0, minf=1 00:23:00.544 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:23:00.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.544 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.544 issued rwts: total=392,425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.544 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.544 job27: 
(groupid=0, jobs=1): err= 0: pid=80985: Thu Jul 25 09:05:06 2024 00:23:00.544 read: IOPS=73, BW=9423KiB/s (9649kB/s)(50.1MiB/5447msec) 00:23:00.544 slat (usec): min=6, max=451, avg=35.82, stdev=35.60 00:23:00.544 clat (msec): min=25, max=490, avg=69.14, stdev=54.23 00:23:00.544 lat (msec): min=25, max=490, avg=69.18, stdev=54.23 00:23:00.544 clat percentiles (msec): 00:23:00.544 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.544 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.544 | 70.00th=[ 54], 80.00th=[ 58], 90.00th=[ 124], 95.00th=[ 165], 00:23:00.544 | 99.00th=[ 257], 99.50th=[ 477], 99.90th=[ 489], 99.95th=[ 489], 00:23:00.544 | 99.99th=[ 489] 00:23:00.544 bw ( KiB/s): min= 6912, max=19456, per=3.40%, avg=10159.90, stdev=3768.35, samples=10 00:23:00.544 iops : min= 54, max= 152, avg=79.20, stdev=29.55, samples=10 00:23:00.544 write: IOPS=77, BW=9940KiB/s (10.2MB/s)(52.9MiB/5447msec); 0 zone resets 00:23:00.544 slat (usec): min=8, max=441, avg=47.66, stdev=44.23 00:23:00.544 clat (msec): min=217, max=1217, avg=757.25, stdev=119.57 00:23:00.544 lat (msec): min=217, max=1217, avg=757.30, stdev=119.57 00:23:00.544 clat percentiles (msec): 00:23:00.544 | 1.00th=[ 347], 5.00th=[ 542], 10.00th=[ 667], 20.00th=[ 735], 00:23:00.544 | 30.00th=[ 743], 40.00th=[ 760], 50.00th=[ 760], 60.00th=[ 768], 00:23:00.544 | 70.00th=[ 776], 80.00th=[ 785], 90.00th=[ 802], 95.00th=[ 944], 00:23:00.544 | 99.00th=[ 1183], 99.50th=[ 1200], 99.90th=[ 1217], 99.95th=[ 1217], 00:23:00.544 | 99.99th=[ 1217] 00:23:00.544 bw ( KiB/s): min= 3072, max=10240, per=3.12%, avg=9314.40, stdev=2208.13, samples=10 00:23:00.544 iops : min= 24, max= 80, avg=72.60, stdev=17.20, samples=10 00:23:00.544 lat (msec) : 50=12.62%, 100=29.37%, 250=6.43%, 500=1.94%, 750=15.17% 00:23:00.544 lat (msec) : 1000=32.40%, 2000=2.06% 00:23:00.544 cpu : usr=0.31%, sys=0.33%, ctx=598, majf=0, minf=1 00:23:00.544 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, 
>=64=92.4% 00:23:00.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.544 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.544 issued rwts: total=401,423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.544 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.544 job28: (groupid=0, jobs=1): err= 0: pid=80986: Thu Jul 25 09:05:06 2024 00:23:00.544 read: IOPS=88, BW=11.1MiB/s (11.6MB/s)(60.2MiB/5435msec) 00:23:00.544 slat (usec): min=7, max=195, avg=32.95, stdev=19.24 00:23:00.544 clat (msec): min=35, max=464, avg=64.57, stdev=42.57 00:23:00.544 lat (msec): min=35, max=464, avg=64.60, stdev=42.57 00:23:00.544 clat percentiles (msec): 00:23:00.544 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.544 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.544 | 70.00th=[ 54], 80.00th=[ 57], 90.00th=[ 103], 95.00th=[ 163], 00:23:00.544 | 99.00th=[ 218], 99.50th=[ 232], 99.90th=[ 464], 99.95th=[ 464], 00:23:00.544 | 99.99th=[ 464] 00:23:00.544 bw ( KiB/s): min= 8704, max=21547, per=4.10%, avg=12264.10, stdev=3790.03, samples=10 00:23:00.544 iops : min= 68, max= 168, avg=95.70, stdev=29.51, samples=10 00:23:00.544 write: IOPS=78, BW=9.82MiB/s (10.3MB/s)(53.4MiB/5435msec); 0 zone resets 00:23:00.544 slat (usec): min=11, max=189, avg=43.05, stdev=21.79 00:23:00.544 clat (msec): min=211, max=1205, avg=740.50, stdev=121.75 00:23:00.544 lat (msec): min=211, max=1205, avg=740.55, stdev=121.75 00:23:00.544 clat percentiles (msec): 00:23:00.544 | 1.00th=[ 334], 5.00th=[ 506], 10.00th=[ 625], 20.00th=[ 718], 00:23:00.544 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 760], 60.00th=[ 768], 00:23:00.544 | 70.00th=[ 776], 80.00th=[ 776], 90.00th=[ 793], 95.00th=[ 919], 00:23:00.544 | 99.00th=[ 1133], 99.50th=[ 1167], 99.90th=[ 1200], 99.95th=[ 1200], 00:23:00.544 | 99.99th=[ 1200] 00:23:00.544 bw ( KiB/s): min= 3591, max=10240, per=3.13%, avg=9368.20, stdev=2038.40, samples=10 
00:23:00.544 iops : min= 28, max= 80, avg=73.10, stdev=15.91, samples=10 00:23:00.544 lat (msec) : 50=17.60%, 100=30.03%, 250=5.50%, 500=2.09%, 750=18.70% 00:23:00.544 lat (msec) : 1000=24.09%, 2000=1.98% 00:23:00.544 cpu : usr=0.18%, sys=0.55%, ctx=656, majf=0, minf=1 00:23:00.544 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:23:00.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.544 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.544 issued rwts: total=482,427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.544 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.544 job29: (groupid=0, jobs=1): err= 0: pid=80987: Thu Jul 25 09:05:06 2024 00:23:00.544 read: IOPS=84, BW=10.6MiB/s (11.1MB/s)(57.8MiB/5456msec) 00:23:00.544 slat (usec): min=10, max=183, avg=31.34, stdev=16.41 00:23:00.544 clat (msec): min=5, max=500, avg=68.06, stdev=59.97 00:23:00.544 lat (msec): min=5, max=500, avg=68.09, stdev=59.97 00:23:00.544 clat percentiles (msec): 00:23:00.544 | 1.00th=[ 10], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 50], 00:23:00.544 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:00.544 | 70.00th=[ 54], 80.00th=[ 57], 90.00th=[ 111], 95.00th=[ 209], 00:23:00.544 | 99.00th=[ 266], 99.50th=[ 477], 99.90th=[ 502], 99.95th=[ 502], 00:23:00.544 | 99.99th=[ 502] 00:23:00.544 bw ( KiB/s): min= 7920, max=23040, per=3.92%, avg=11719.20, stdev=4291.85, samples=10 00:23:00.544 iops : min= 61, max= 180, avg=91.30, stdev=33.70, samples=10 00:23:00.544 write: IOPS=77, BW=9947KiB/s (10.2MB/s)(53.0MiB/5456msec); 0 zone resets 00:23:00.544 slat (usec): min=13, max=217, avg=40.17, stdev=19.96 00:23:00.544 clat (msec): min=77, max=1190, avg=748.10, stdev=122.06 00:23:00.544 lat (msec): min=77, max=1190, avg=748.14, stdev=122.06 00:23:00.544 clat percentiles (msec): 00:23:00.544 | 1.00th=[ 288], 5.00th=[ 550], 10.00th=[ 651], 20.00th=[ 726], 00:23:00.544 | 30.00th=[ 
743], 40.00th=[ 751], 50.00th=[ 760], 60.00th=[ 768], 00:23:00.544 | 70.00th=[ 776], 80.00th=[ 785], 90.00th=[ 810], 95.00th=[ 911], 00:23:00.544 | 99.00th=[ 1167], 99.50th=[ 1183], 99.90th=[ 1183], 99.95th=[ 1183], 00:23:00.544 | 99.99th=[ 1183] 00:23:00.544 bw ( KiB/s): min= 3584, max=10240, per=3.12%, avg=9338.00, stdev=2036.36, samples=10 00:23:00.544 iops : min= 28, max= 80, avg=72.70, stdev=15.84, samples=10 00:23:00.544 lat (msec) : 10=0.90%, 20=0.34%, 50=16.48%, 100=29.01%, 250=3.84% 00:23:00.544 lat (msec) : 500=3.05%, 750=17.38%, 1000=27.31%, 2000=1.69% 00:23:00.544 cpu : usr=0.07%, sys=0.64%, ctx=573, majf=0, minf=1 00:23:00.544 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:23:00.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.544 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:00.544 issued rwts: total=462,424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.544 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:00.544 00:23:00.544 Run status group 0 (all jobs): 00:23:00.544 READ: bw=292MiB/s (306MB/s), 8713KiB/s-11.1MiB/s (8922kB/s-11.6MB/s), io=1596MiB (1674MB), run=5418-5470msec 00:23:00.544 WRITE: bw=292MiB/s (306MB/s), 9909KiB/s-9.92MiB/s (10.1MB/s-10.4MB/s), io=1596MiB (1674MB), run=5418-5470msec 00:23:00.544 00:23:00.544 Disk stats (read/write): 00:23:00.544 sda: ios=451/405, merge=0/0, ticks=23623/300986, in_queue=324609, util=93.56% 00:23:00.544 sdb: ios=420/404, merge=0/0, ticks=22892/300717, in_queue=323610, util=93.66% 00:23:00.544 sdc: ios=489/404, merge=0/0, ticks=25630/298768, in_queue=324398, util=93.92% 00:23:00.544 sdd: ios=464/403, merge=0/0, ticks=25414/296848, in_queue=322263, util=94.13% 00:23:00.544 sde: ios=462/403, merge=0/0, ticks=26372/296310, in_queue=322683, util=94.20% 00:23:00.544 sdf: ios=492/403, merge=0/0, ticks=26757/295371, in_queue=322128, util=94.30% 00:23:00.544 sdg: ios=471/403, merge=0/0, ticks=25580/295865, 
in_queue=321446, util=94.49% 00:23:00.544 sdh: ios=433/403, merge=0/0, ticks=23878/298626, in_queue=322504, util=94.85% 00:23:00.544 sdi: ios=417/403, merge=0/0, ticks=24166/297825, in_queue=321991, util=94.18% 00:23:00.544 sdj: ios=494/402, merge=0/0, ticks=30076/291721, in_queue=321798, util=95.09% 00:23:00.544 sdk: ios=475/402, merge=0/0, ticks=28413/292408, in_queue=320822, util=92.75% 00:23:00.544 sdl: ios=423/406, merge=0/0, ticks=27552/297072, in_queue=324625, util=94.62% 00:23:00.544 sdm: ios=495/402, merge=0/0, ticks=29765/291914, in_queue=321680, util=95.14% 00:23:00.544 sdn: ios=457/404, merge=0/0, ticks=25984/297897, in_queue=323881, util=95.81% 00:23:00.544 sdo: ios=438/414, merge=0/0, ticks=26946/297752, in_queue=324698, util=95.74% 00:23:00.544 sdp: ios=413/403, merge=0/0, ticks=24138/300241, in_queue=324379, util=95.81% 00:23:00.544 sdq: ios=476/403, merge=0/0, ticks=28923/292394, in_queue=321318, util=95.21% 00:23:00.544 sdr: ios=399/402, merge=0/0, ticks=25612/295235, in_queue=320848, util=95.40% 00:23:00.544 sds: ios=428/403, merge=0/0, ticks=26605/294566, in_queue=321172, util=95.43% 00:23:00.544 sdt: ios=471/403, merge=0/0, ticks=29153/294443, in_queue=323597, util=96.18% 00:23:00.544 sdu: ios=406/403, merge=0/0, ticks=24277/298062, in_queue=322340, util=96.31% 00:23:00.544 sdv: ios=387/402, merge=0/0, ticks=23969/297263, in_queue=321232, util=96.22% 00:23:00.544 sdw: ios=477/404, merge=0/0, ticks=27523/296835, in_queue=324359, util=96.95% 00:23:00.544 sdx: ios=402/402, merge=0/0, ticks=25941/295713, in_queue=321655, util=96.36% 00:23:00.544 sdy: ios=455/403, merge=0/0, ticks=27427/294675, in_queue=322102, util=96.51% 00:23:00.545 sdz: ios=435/402, merge=0/0, ticks=26955/294079, in_queue=321034, util=96.32% 00:23:00.545 sdaa: ios=392/402, merge=0/0, ticks=25104/296745, in_queue=321850, util=96.39% 00:23:00.545 sdab: ios=401/403, merge=0/0, ticks=25907/297451, in_queue=323359, util=96.98% 00:23:00.545 sdac: ios=482/403, merge=0/0, 
ticks=30208/291872, in_queue=322080, util=96.70% 00:23:00.545 sdad: ios=462/405, merge=0/0, ticks=29655/294316, in_queue=323972, util=97.92% 00:23:00.545 [2024-07-25 09:05:06.908011] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 [2024-07-25 09:05:06.910251] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 [2024-07-25 09:05:06.912384] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 [2024-07-25 09:05:06.917898] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 [2024-07-25 09:05:06.920338] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 [2024-07-25 09:05:06.922622] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 [2024-07-25 09:05:06.925075] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 [2024-07-25 09:05:06.927064] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 [2024-07-25 09:05:06.929191] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 [2024-07-25 09:05:06.931270] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 [2024-07-25 09:05:06.933578] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 [2024-07-25 09:05:06.935664] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 [2024-07-25 09:05:06.937964] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 [2024-07-25 09:05:06.940139] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 09:05:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 262144 -d 16 -t randwrite -r 10 00:23:00.545 [2024-07-25 09:05:06.942246] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.545 [global] 00:23:00.545 thread=1 00:23:00.545 invalidate=1 00:23:00.545 rw=randwrite 00:23:00.545 time_based=1 00:23:00.545 runtime=10 00:23:00.545 ioengine=libaio 00:23:00.545 direct=1 00:23:00.545 bs=262144 00:23:00.545 iodepth=16 00:23:00.545 norandommap=1 00:23:00.545 numjobs=1 00:23:00.545 00:23:00.545 [job0] 00:23:00.545 filename=/dev/sda 00:23:00.545 [job1] 00:23:00.545 filename=/dev/sdb 00:23:00.545 [job2] 00:23:00.545 filename=/dev/sdc 00:23:00.545 [job3] 00:23:00.545 filename=/dev/sdd 00:23:00.545 [job4] 00:23:00.545 filename=/dev/sde 00:23:00.545 [job5] 00:23:00.545 filename=/dev/sdf 00:23:00.545 [job6] 00:23:00.545 filename=/dev/sdg 00:23:00.545 [job7] 00:23:00.545 filename=/dev/sdh 00:23:00.545 [job8] 00:23:00.545 filename=/dev/sdi 00:23:00.545 [job9] 00:23:00.545 filename=/dev/sdj 00:23:00.545 [job10] 00:23:00.545 filename=/dev/sdk 00:23:00.545 [job11] 00:23:00.545 filename=/dev/sdl 00:23:00.545 [job12] 00:23:00.545 filename=/dev/sdm 00:23:00.545 [job13] 00:23:00.545 filename=/dev/sdn 00:23:00.545 [job14] 00:23:00.545 filename=/dev/sdo 00:23:00.545 [job15] 00:23:00.545 filename=/dev/sdp 00:23:00.545 [job16] 00:23:00.545 filename=/dev/sdq 00:23:00.545 [job17] 00:23:00.545 filename=/dev/sdr 00:23:00.545 [job18] 00:23:00.545 filename=/dev/sds 00:23:00.545 [job19] 00:23:00.545 filename=/dev/sdt 00:23:00.545 [job20] 00:23:00.545 filename=/dev/sdu 00:23:00.545 [job21] 00:23:00.545 filename=/dev/sdv 00:23:00.545 [job22] 00:23:00.545 filename=/dev/sdw 00:23:00.545 [job23] 00:23:00.545 filename=/dev/sdx 00:23:00.545 [job24] 00:23:00.545 filename=/dev/sdy 00:23:00.545 [job25] 00:23:00.545 filename=/dev/sdz 00:23:00.545 [job26] 00:23:00.545 filename=/dev/sdaa 00:23:00.545 [job27] 00:23:00.545 filename=/dev/sdab 00:23:00.545 [job28] 
00:23:00.545 filename=/dev/sdac 00:23:00.545 [job29] 00:23:00.545 filename=/dev/sdad 00:23:00.545 queue_depth set to 113 (sda) 00:23:00.545 queue_depth set to 113 (sdb) 00:23:00.545 queue_depth set to 113 (sdc) 00:23:00.545 queue_depth set to 113 (sdd) 00:23:00.545 queue_depth set to 113 (sde) 00:23:00.545 queue_depth set to 113 (sdf) 00:23:00.545 queue_depth set to 113 (sdg) 00:23:00.545 queue_depth set to 113 (sdh) 00:23:00.545 queue_depth set to 113 (sdi) 00:23:00.545 queue_depth set to 113 (sdj) 00:23:00.545 queue_depth set to 113 (sdk) 00:23:00.545 queue_depth set to 113 (sdl) 00:23:00.545 queue_depth set to 113 (sdm) 00:23:00.545 queue_depth set to 113 (sdn) 00:23:00.545 queue_depth set to 113 (sdo) 00:23:00.545 queue_depth set to 113 (sdp) 00:23:00.545 queue_depth set to 113 (sdq) 00:23:00.545 queue_depth set to 113 (sdr) 00:23:00.545 queue_depth set to 113 (sds) 00:23:00.545 queue_depth set to 113 (sdt) 00:23:00.545 queue_depth set to 113 (sdu) 00:23:00.545 queue_depth set to 113 (sdv) 00:23:00.545 queue_depth set to 113 (sdw) 00:23:00.545 queue_depth set to 113 (sdx) 00:23:00.545 queue_depth set to 113 (sdy) 00:23:00.545 queue_depth set to 113 (sdz) 00:23:00.545 queue_depth set to 113 (sdaa) 00:23:00.545 queue_depth set to 113 (sdab) 00:23:00.545 queue_depth set to 113 (sdac) 00:23:00.545 queue_depth set to 113 (sdad) 00:23:00.805 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 
00:23:00.805 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job11: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job12: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job13: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job14: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job15: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job16: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job17: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job18: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job19: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job20: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job21: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job22: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job23: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job24: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job25: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job26: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job27: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job28: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 job29: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:23:00.805 fio-3.35 00:23:00.805 Starting 30 threads 00:23:00.805 [2024-07-25 09:05:07.810772] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.805 [2024-07-25 09:05:07.815479] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.805 [2024-07-25 09:05:07.819302] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.805 [2024-07-25 09:05:07.823095] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.805 [2024-07-25 09:05:07.827230] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.805 [2024-07-25 09:05:07.830585] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD 
page 0xb9 00:23:00.805 [2024-07-25 09:05:07.833523] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.805 [2024-07-25 09:05:07.836506] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.805 [2024-07-25 09:05:07.839117] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.805 [2024-07-25 09:05:07.842300] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.805 [2024-07-25 09:05:07.845245] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.805 [2024-07-25 09:05:07.847815] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.805 [2024-07-25 09:05:07.850043] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.852293] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.854300] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.856402] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.858448] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.860448] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.862201] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.864062] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.866425] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.868575] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 
09:05:07.870418] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.872341] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.874243] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.876080] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.878012] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.879895] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.881715] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:00.806 [2024-07-25 09:05:07.883439] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.438361] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.459540] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.469384] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.473264] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.477647] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.481460] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.484389] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.487437] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.490657] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.493672] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.496001] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.498433] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.503912] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.505763] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.507824] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.509811] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.511824] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.513793] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 [2024-07-25 09:05:18.515749] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.020 00:23:13.020 job0: (groupid=0, jobs=1): err= 0: pid=81487: Thu Jul 25 09:05:18 2024 00:23:13.020 write: IOPS=73, BW=18.4MiB/s (19.3MB/s)(188MiB/10194msec); 0 zone resets 00:23:13.020 slat (usec): min=30, max=270, avg=72.97, stdev=19.93 00:23:13.020 clat (msec): min=3, max=441, avg=216.86, stdev=31.16 00:23:13.020 lat (msec): min=4, max=441, avg=216.94, stdev=31.16 00:23:13.020 clat percentiles (msec): 00:23:13.020 | 1.00th=[ 44], 5.00th=[ 211], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.020 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.020 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 230], 00:23:13.020 | 99.00th=[ 347], 99.50th=[ 401], 99.90th=[ 
443], 99.95th=[ 443], 00:23:13.020 | 99.99th=[ 443] 00:23:13.020 bw ( KiB/s): min=16416, max=20992, per=3.35%, avg=18806.25, stdev=839.67, samples=20 00:23:13.020 iops : min= 64, max= 82, avg=73.20, stdev= 3.32, samples=20 00:23:13.020 lat (msec) : 4=0.13%, 10=0.27%, 20=0.27%, 50=0.40%, 100=0.53% 00:23:13.020 lat (msec) : 250=95.87%, 500=2.53% 00:23:13.020 cpu : usr=0.37%, sys=0.35%, ctx=758, majf=0, minf=1 00:23:13.020 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.020 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.020 issued rwts: total=0,751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.020 job1: (groupid=0, jobs=1): err= 0: pid=81488: Thu Jul 25 09:05:18 2024 00:23:13.020 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(187MiB/10173msec); 0 zone resets 00:23:13.020 slat (usec): min=29, max=246, avg=76.65, stdev=17.16 00:23:13.020 clat (msec): min=7, max=454, avg=217.85, stdev=30.44 00:23:13.020 lat (msec): min=7, max=454, avg=217.93, stdev=30.45 00:23:13.020 clat percentiles (msec): 00:23:13.020 | 1.00th=[ 75], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.020 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.020 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 228], 00:23:13.020 | 99.00th=[ 363], 99.50th=[ 414], 99.90th=[ 456], 99.95th=[ 456], 00:23:13.020 | 99.99th=[ 456] 00:23:13.020 bw ( KiB/s): min=15329, max=19456, per=3.33%, avg=18702.50, stdev=873.24, samples=20 00:23:13.020 iops : min= 59, max= 76, avg=72.80, stdev= 3.53, samples=20 00:23:13.020 lat (msec) : 10=0.13%, 20=0.27%, 50=0.40%, 100=0.40%, 250=96.65% 00:23:13.020 lat (msec) : 500=2.14% 00:23:13.020 cpu : usr=0.33%, sys=0.38%, ctx=746, majf=0, minf=1 00:23:13.020 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 
00:23:13.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.020 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.020 issued rwts: total=0,746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.020 job2: (groupid=0, jobs=1): err= 0: pid=81489: Thu Jul 25 09:05:18 2024 00:23:13.020 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10172msec); 0 zone resets 00:23:13.020 slat (usec): min=17, max=581, avg=73.41, stdev=34.73 00:23:13.020 clat (msec): min=22, max=426, avg=218.13, stdev=24.55 00:23:13.020 lat (msec): min=22, max=426, avg=218.20, stdev=24.55 00:23:13.020 clat percentiles (msec): 00:23:13.020 | 1.00th=[ 117], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.020 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.021 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 234], 00:23:13.021 | 99.00th=[ 292], 99.50th=[ 388], 99.90th=[ 426], 99.95th=[ 426], 00:23:13.021 | 99.99th=[ 426] 00:23:13.021 bw ( KiB/s): min=16384, max=19456, per=3.33%, avg=18684.25, stdev=674.84, samples=20 00:23:13.021 iops : min= 64, max= 76, avg=72.90, stdev= 2.65, samples=20 00:23:13.021 lat (msec) : 50=0.40%, 100=0.40%, 250=96.64%, 500=2.55% 00:23:13.021 cpu : usr=0.27%, sys=0.37%, ctx=753, majf=0, minf=1 00:23:13.021 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.021 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.021 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.021 [2024-07-25 09:05:18.517671] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.021 job3: (groupid=0, jobs=1): err= 0: pid=81490: Thu Jul 25 09:05:18 2024 00:23:13.021 write: 
IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10172msec); 0 zone resets 00:23:13.021 slat (usec): min=19, max=154, avg=69.09, stdev=14.54 00:23:13.021 clat (msec): min=24, max=424, avg=218.14, stdev=24.16 00:23:13.021 lat (msec): min=24, max=424, avg=218.21, stdev=24.17 00:23:13.021 clat percentiles (msec): 00:23:13.021 | 1.00th=[ 120], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.021 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.021 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 232], 00:23:13.021 | 99.00th=[ 288], 99.50th=[ 384], 99.90th=[ 426], 99.95th=[ 426], 00:23:13.021 | 99.99th=[ 426] 00:23:13.021 bw ( KiB/s): min=16896, max=19456, per=3.33%, avg=18691.70, stdev=540.24, samples=20 00:23:13.021 iops : min= 66, max= 76, avg=73.00, stdev= 2.10, samples=20 00:23:13.021 lat (msec) : 50=0.27%, 100=0.54%, 250=96.51%, 500=2.68% 00:23:13.021 cpu : usr=0.23%, sys=0.37%, ctx=744, majf=0, minf=1 00:23:13.021 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.021 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.021 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.021 job4: (groupid=0, jobs=1): err= 0: pid=81492: Thu Jul 25 09:05:18 2024 00:23:13.021 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10173msec); 0 zone resets 00:23:13.021 slat (usec): min=27, max=350, avg=101.10, stdev=49.53 00:23:13.021 clat (msec): min=24, max=426, avg=218.13, stdev=24.38 00:23:13.021 lat (msec): min=24, max=426, avg=218.23, stdev=24.38 00:23:13.021 clat percentiles (msec): 00:23:13.021 | 1.00th=[ 118], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.021 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.021 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 232], 
00:23:13.021 | 99.00th=[ 288], 99.50th=[ 388], 99.90th=[ 426], 99.95th=[ 426], 00:23:13.021 | 99.99th=[ 426] 00:23:13.021 bw ( KiB/s): min=16896, max=19456, per=3.33%, avg=18687.95, stdev=538.39, samples=20 00:23:13.021 iops : min= 66, max= 76, avg=72.95, stdev= 2.09, samples=20 00:23:13.021 lat (msec) : 50=0.27%, 100=0.54%, 250=96.51%, 500=2.68% 00:23:13.021 cpu : usr=0.27%, sys=0.54%, ctx=824, majf=0, minf=1 00:23:13.021 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.021 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.021 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.021 job5: (groupid=0, jobs=1): err= 0: pid=81493: Thu Jul 25 09:05:18 2024 00:23:13.021 write: IOPS=73, BW=18.4MiB/s (19.3MB/s)(187MiB/10170msec); 0 zone resets 00:23:13.021 slat (usec): min=30, max=505, avg=92.12, stdev=33.10 00:23:13.021 clat (msec): min=24, max=395, avg=217.50, stdev=21.64 00:23:13.021 lat (msec): min=25, max=395, avg=217.59, stdev=21.64 00:23:13.021 clat percentiles (msec): 00:23:13.021 | 1.00th=[ 121], 5.00th=[ 211], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.021 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.021 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 234], 00:23:13.021 | 99.00th=[ 279], 99.50th=[ 300], 99.90th=[ 397], 99.95th=[ 397], 00:23:13.021 | 99.99th=[ 397] 00:23:13.021 bw ( KiB/s): min=17408, max=19456, per=3.33%, avg=18711.55, stdev=449.12, samples=20 00:23:13.021 iops : min= 68, max= 76, avg=73.00, stdev= 1.75, samples=20 00:23:13.021 lat (msec) : 50=0.27%, 100=0.54%, 250=96.52%, 500=2.68% 00:23:13.021 cpu : usr=0.37%, sys=0.38%, ctx=765, majf=0, minf=1 00:23:13.021 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.021 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.021 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.021 issued rwts: total=0,747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.021 job6: (groupid=0, jobs=1): err= 0: pid=81494: Thu Jul 25 09:05:18 2024 00:23:13.021 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10171msec); 0 zone resets 00:23:13.021 slat (usec): min=20, max=193, avg=79.06, stdev=18.21 00:23:13.021 clat (msec): min=22, max=425, avg=218.10, stdev=24.54 00:23:13.021 lat (msec): min=22, max=425, avg=218.18, stdev=24.54 00:23:13.021 clat percentiles (msec): 00:23:13.021 | 1.00th=[ 116], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.021 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.021 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 234], 00:23:13.021 | 99.00th=[ 288], 99.50th=[ 384], 99.90th=[ 426], 99.95th=[ 426], 00:23:13.021 | 99.99th=[ 426] 00:23:13.021 bw ( KiB/s): min=16896, max=19456, per=3.33%, avg=18687.85, stdev=534.81, samples=20 00:23:13.021 iops : min= 66, max= 76, avg=72.95, stdev= 2.09, samples=20 00:23:13.021 lat (msec) : 50=0.40%, 100=0.40%, 250=96.51%, 500=2.68% 00:23:13.021 cpu : usr=0.40%, sys=0.38%, ctx=747, majf=0, minf=1 00:23:13.021 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.021 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.021 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.021 job7: (groupid=0, jobs=1): err= 0: pid=81525: Thu Jul 25 09:05:18 2024 00:23:13.021 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10172msec); 0 zone resets 00:23:13.021 slat (usec): min=27, max=261, avg=74.17, stdev=18.26 00:23:13.021 clat (msec): 
min=24, max=425, avg=218.13, stdev=24.32 00:23:13.021 lat (msec): min=24, max=425, avg=218.20, stdev=24.32 00:23:13.021 clat percentiles (msec): 00:23:13.021 | 1.00th=[ 120], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.021 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.021 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 234], 00:23:13.021 | 99.00th=[ 300], 99.50th=[ 384], 99.90th=[ 426], 99.95th=[ 426], 00:23:13.021 | 99.99th=[ 426] 00:23:13.021 bw ( KiB/s): min=16896, max=19456, per=3.33%, avg=18691.70, stdev=540.24, samples=20 00:23:13.021 iops : min= 66, max= 76, avg=73.00, stdev= 2.10, samples=20 00:23:13.021 lat (msec) : 50=0.27%, 100=0.54%, 250=96.51%, 500=2.68% 00:23:13.021 cpu : usr=0.36%, sys=0.40%, ctx=747, majf=0, minf=1 00:23:13.021 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.021 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.021 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.021 job8: (groupid=0, jobs=1): err= 0: pid=81526: Thu Jul 25 09:05:18 2024 00:23:13.021 write: IOPS=73, BW=18.4MiB/s (19.3MB/s)(188MiB/10193msec); 0 zone resets 00:23:13.021 slat (usec): min=25, max=180, avg=67.99, stdev=18.36 00:23:13.021 clat (msec): min=2, max=432, avg=216.54, stdev=30.88 00:23:13.021 lat (msec): min=2, max=432, avg=216.61, stdev=30.89 00:23:13.021 clat percentiles (msec): 00:23:13.021 | 1.00th=[ 43], 5.00th=[ 211], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.021 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.021 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 230], 00:23:13.021 | 99.00th=[ 338], 99.50th=[ 393], 99.90th=[ 435], 99.95th=[ 435], 00:23:13.021 | 99.99th=[ 435] 00:23:13.021 bw ( KiB/s): min=16351, max=20992, 
per=3.35%, avg=18822.85, stdev=826.00, samples=20 00:23:13.021 iops : min= 63, max= 82, avg=73.20, stdev= 3.33, samples=20 00:23:13.021 lat (msec) : 4=0.13%, 10=0.27%, 20=0.13%, 50=0.66%, 100=0.40% 00:23:13.021 lat (msec) : 250=96.28%, 500=2.13% 00:23:13.021 cpu : usr=0.27%, sys=0.37%, ctx=752, majf=0, minf=1 00:23:13.021 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.021 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.021 issued rwts: total=0,752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.021 job9: (groupid=0, jobs=1): err= 0: pid=81527: Thu Jul 25 09:05:18 2024 00:23:13.021 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(187MiB/10171msec); 0 zone resets 00:23:13.021 slat (usec): min=26, max=666, avg=62.91, stdev=30.80 00:23:13.021 clat (msec): min=24, max=410, avg=217.85, stdev=22.93 00:23:13.021 lat (msec): min=24, max=410, avg=217.91, stdev=22.93 00:23:13.021 clat percentiles (msec): 00:23:13.021 | 1.00th=[ 120], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.021 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.021 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 234], 00:23:13.021 | 99.00th=[ 288], 99.50th=[ 372], 99.90th=[ 409], 99.95th=[ 409], 00:23:13.021 | 99.99th=[ 409] 00:23:13.021 bw ( KiB/s): min=16896, max=19456, per=3.33%, avg=18682.40, stdev=539.25, samples=20 00:23:13.021 iops : min= 66, max= 76, avg=72.85, stdev= 2.16, samples=20 00:23:13.021 lat (msec) : 50=0.27%, 100=0.54%, 250=96.51%, 500=2.68% 00:23:13.021 cpu : usr=0.28%, sys=0.30%, ctx=767, majf=0, minf=1 00:23:13.021 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 issued rwts: total=0,746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.022 job10: (groupid=0, jobs=1): err= 0: pid=81528: Thu Jul 25 09:05:18 2024 00:23:13.022 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10174msec); 0 zone resets 00:23:13.022 slat (usec): min=31, max=2228, avg=83.48, stdev=102.74 00:23:13.022 clat (msec): min=24, max=426, avg=218.16, stdev=24.42 00:23:13.022 lat (msec): min=24, max=426, avg=218.25, stdev=24.43 00:23:13.022 clat percentiles (msec): 00:23:13.022 | 1.00th=[ 120], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.022 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.022 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 226], 00:23:13.022 | 99.00th=[ 300], 99.50th=[ 388], 99.90th=[ 426], 99.95th=[ 426], 00:23:13.022 | 99.99th=[ 426] 00:23:13.022 bw ( KiB/s): min=16896, max=19456, per=3.33%, avg=18685.95, stdev=533.92, samples=20 00:23:13.022 iops : min= 66, max= 76, avg=72.90, stdev= 2.07, samples=20 00:23:13.022 lat (msec) : 50=0.27%, 100=0.54%, 250=96.51%, 500=2.68% 00:23:13.022 cpu : usr=0.35%, sys=0.43%, ctx=771, majf=0, minf=1 00:23:13.022 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.022 job11: (groupid=0, jobs=1): err= 0: pid=81529: Thu Jul 25 09:05:18 2024 00:23:13.022 write: IOPS=73, BW=18.4MiB/s (19.3MB/s)(187MiB/10176msec); 0 zone resets 00:23:13.022 slat (usec): min=26, max=317, avg=72.42, stdev=21.66 00:23:13.022 clat (msec): min=7, max=443, avg=217.05, stdev=31.35 00:23:13.022 lat (msec): min=7, max=443, avg=217.12, stdev=31.35 
00:23:13.022 clat percentiles (msec): 00:23:13.022 | 1.00th=[ 49], 5.00th=[ 211], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.022 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.022 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 226], 00:23:13.022 | 99.00th=[ 351], 99.50th=[ 405], 99.90th=[ 443], 99.95th=[ 443], 00:23:13.022 | 99.99th=[ 443] 00:23:13.022 bw ( KiB/s): min=15903, max=20521, per=3.34%, avg=18782.55, stdev=844.15, samples=20 00:23:13.022 iops : min= 62, max= 80, avg=73.10, stdev= 3.26, samples=20 00:23:13.022 lat (msec) : 10=0.27%, 20=0.27%, 50=0.53%, 100=0.40%, 250=96.53% 00:23:13.022 lat (msec) : 500=2.00% 00:23:13.022 cpu : usr=0.24%, sys=0.48%, ctx=754, majf=0, minf=1 00:23:13.022 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 issued rwts: total=0,749,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.022 job12: (groupid=0, jobs=1): err= 0: pid=81530: Thu Jul 25 09:05:18 2024 00:23:13.022 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10175msec); 0 zone resets 00:23:13.022 slat (usec): min=25, max=439, avg=75.33, stdev=25.59 00:23:13.022 clat (msec): min=15, max=452, avg=218.48, stdev=28.02 00:23:13.022 lat (msec): min=15, max=452, avg=218.56, stdev=28.03 00:23:13.022 clat percentiles (msec): 00:23:13.022 | 1.00th=[ 107], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.022 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.022 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 228], 00:23:13.022 | 99.00th=[ 359], 99.50th=[ 414], 99.90th=[ 451], 99.95th=[ 451], 00:23:13.022 | 99.99th=[ 451] 00:23:13.022 bw ( KiB/s): min=15840, max=19456, per=3.32%, avg=18655.10, stdev=718.36, samples=20 00:23:13.022 iops 
: min= 61, max= 76, avg=72.70, stdev= 2.96, samples=20 00:23:13.022 lat (msec) : 20=0.13%, 50=0.27%, 100=0.54%, 250=96.37%, 500=2.69% 00:23:13.022 cpu : usr=0.32%, sys=0.33%, ctx=747, majf=0, minf=1 00:23:13.022 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 issued rwts: total=0,744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.022 job13: (groupid=0, jobs=1): err= 0: pid=81531: Thu Jul 25 09:05:18 2024 00:23:13.022 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10170msec); 0 zone resets 00:23:13.022 slat (usec): min=23, max=163, avg=64.35, stdev=18.48 00:23:13.022 clat (msec): min=15, max=459, avg=218.40, stdev=27.44 00:23:13.022 lat (msec): min=15, max=459, avg=218.46, stdev=27.45 00:23:13.022 clat percentiles (msec): 00:23:13.022 | 1.00th=[ 109], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.022 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.022 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 226], 00:23:13.022 | 99.00th=[ 355], 99.50th=[ 405], 99.90th=[ 460], 99.95th=[ 460], 00:23:13.022 | 99.99th=[ 460] 00:23:13.022 bw ( KiB/s): min=15872, max=19456, per=3.32%, avg=18660.50, stdev=732.35, samples=20 00:23:13.022 iops : min= 62, max= 76, avg=72.85, stdev= 2.85, samples=20 00:23:13.022 lat (msec) : 20=0.13%, 50=0.27%, 100=0.54%, 250=96.64%, 500=2.42% 00:23:13.022 cpu : usr=0.22%, sys=0.40%, ctx=746, majf=0, minf=1 00:23:13.022 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 issued rwts: total=0,744,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:23:13.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.022 job14: (groupid=0, jobs=1): err= 0: pid=81532: Thu Jul 25 09:05:18 2024 00:23:13.022 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10176msec); 0 zone resets 00:23:13.022 slat (usec): min=26, max=388, avg=80.95, stdev=26.11 00:23:13.022 clat (msec): min=21, max=429, avg=218.20, stdev=24.84 00:23:13.022 lat (msec): min=21, max=429, avg=218.28, stdev=24.84 00:23:13.022 clat percentiles (msec): 00:23:13.022 | 1.00th=[ 116], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.022 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.022 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 234], 00:23:13.022 | 99.00th=[ 288], 99.50th=[ 388], 99.90th=[ 430], 99.95th=[ 430], 00:23:13.022 | 99.99th=[ 430] 00:23:13.022 bw ( KiB/s): min=16896, max=19456, per=3.33%, avg=18686.10, stdev=562.48, samples=20 00:23:13.022 iops : min= 66, max= 76, avg=72.95, stdev= 2.19, samples=20 00:23:13.022 lat (msec) : 50=0.40%, 100=0.40%, 250=96.51%, 500=2.68% 00:23:13.022 cpu : usr=0.28%, sys=0.53%, ctx=755, majf=0, minf=1 00:23:13.022 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.022 job15: (groupid=0, jobs=1): err= 0: pid=81533: Thu Jul 25 09:05:18 2024 00:23:13.022 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10173msec); 0 zone resets 00:23:13.022 slat (usec): min=39, max=276, avg=78.35, stdev=17.91 00:23:13.022 clat (msec): min=7, max=452, avg=218.17, stdev=29.15 00:23:13.022 lat (msec): min=7, max=452, avg=218.25, stdev=29.15 00:23:13.022 clat percentiles (msec): 00:23:13.022 | 1.00th=[ 91], 5.00th=[ 213], 10.00th=[ 213], 
20.00th=[ 215], 00:23:13.022 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.022 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 232], 00:23:13.022 | 99.00th=[ 359], 99.50th=[ 414], 99.90th=[ 451], 99.95th=[ 451], 00:23:13.022 | 99.99th=[ 451] 00:23:13.022 bw ( KiB/s): min=15329, max=19417, per=3.32%, avg=18653.20, stdev=824.48, samples=20 00:23:13.022 iops : min= 59, max= 75, avg=72.65, stdev= 3.34, samples=20 00:23:13.022 lat (msec) : 10=0.13%, 20=0.13%, 50=0.27%, 100=0.54%, 250=96.51% 00:23:13.022 lat (msec) : 500=2.42% 00:23:13.022 cpu : usr=0.29%, sys=0.45%, ctx=750, majf=0, minf=1 00:23:13.022 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.022 job16: (groupid=0, jobs=1): err= 0: pid=81534: Thu Jul 25 09:05:18 2024 00:23:13.022 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10167msec); 0 zone resets 00:23:13.022 slat (usec): min=23, max=166, avg=68.67, stdev=16.09 00:23:13.022 clat (msec): min=24, max=420, avg=218.05, stdev=23.94 00:23:13.022 lat (msec): min=24, max=420, avg=218.12, stdev=23.94 00:23:13.022 clat percentiles (msec): 00:23:13.022 | 1.00th=[ 120], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.022 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.022 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 228], 00:23:13.022 | 99.00th=[ 296], 99.50th=[ 380], 99.90th=[ 422], 99.95th=[ 422], 00:23:13.022 | 99.99th=[ 422] 00:23:13.022 bw ( KiB/s): min=16896, max=19456, per=3.33%, avg=18685.95, stdev=533.92, samples=20 00:23:13.022 iops : min= 66, max= 76, avg=72.90, stdev= 2.07, samples=20 00:23:13.022 lat (msec) : 50=0.27%, 
100=0.54%, 250=96.64%, 500=2.55% 00:23:13.022 cpu : usr=0.34%, sys=0.33%, ctx=750, majf=0, minf=1 00:23:13.022 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.022 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.022 job17: (groupid=0, jobs=1): err= 0: pid=81535: Thu Jul 25 09:05:18 2024 00:23:13.022 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10169msec); 0 zone resets 00:23:13.022 slat (usec): min=26, max=388, avg=79.10, stdev=21.96 00:23:13.022 clat (msec): min=25, max=420, avg=218.06, stdev=23.97 00:23:13.023 lat (msec): min=25, max=420, avg=218.14, stdev=23.97 00:23:13.023 clat percentiles (msec): 00:23:13.023 | 1.00th=[ 121], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.023 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.023 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 234], 00:23:13.023 | 99.00th=[ 296], 99.50th=[ 380], 99.90th=[ 422], 99.95th=[ 422], 00:23:13.023 | 99.99th=[ 422] 00:23:13.023 bw ( KiB/s): min=16896, max=19456, per=3.33%, avg=18685.95, stdev=533.92, samples=20 00:23:13.023 iops : min= 66, max= 76, avg=72.90, stdev= 2.07, samples=20 00:23:13.023 lat (msec) : 50=0.27%, 100=0.54%, 250=96.51%, 500=2.68% 00:23:13.023 cpu : usr=0.31%, sys=0.50%, ctx=749, majf=0, minf=1 00:23:13.023 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.023 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.023 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.023 job18: (groupid=0, jobs=1): 
err= 0: pid=81536: Thu Jul 25 09:05:18 2024 00:23:13.023 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(187MiB/10176msec); 0 zone resets 00:23:13.023 slat (usec): min=25, max=1140, avg=66.03, stdev=46.14 00:23:13.023 clat (msec): min=7, max=453, avg=217.91, stdev=30.19 00:23:13.023 lat (msec): min=9, max=454, avg=217.98, stdev=30.18 00:23:13.023 clat percentiles (msec): 00:23:13.023 | 1.00th=[ 79], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.023 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.023 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 228], 00:23:13.023 | 99.00th=[ 359], 99.50th=[ 414], 99.90th=[ 456], 99.95th=[ 456], 00:23:13.023 | 99.99th=[ 456] 00:23:13.023 bw ( KiB/s): min=15840, max=19456, per=3.33%, avg=18702.50, stdev=736.58, samples=20 00:23:13.023 iops : min= 61, max= 76, avg=72.80, stdev= 3.02, samples=20 00:23:13.023 lat (msec) : 10=0.13%, 20=0.13%, 50=0.40%, 100=0.54%, 250=96.78% 00:23:13.023 lat (msec) : 500=2.01% 00:23:13.023 cpu : usr=0.23%, sys=0.35%, ctx=772, majf=0, minf=1 00:23:13.023 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.023 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.023 issued rwts: total=0,746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.023 job19: (groupid=0, jobs=1): err= 0: pid=81537: Thu Jul 25 09:05:18 2024 00:23:13.023 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10176msec); 0 zone resets 00:23:13.023 slat (usec): min=24, max=799, avg=66.16, stdev=38.69 00:23:13.023 clat (msec): min=18, max=434, avg=218.21, stdev=25.58 00:23:13.023 lat (msec): min=18, max=434, avg=218.27, stdev=25.57 00:23:13.023 clat percentiles (msec): 00:23:13.023 | 1.00th=[ 113], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.023 | 30.00th=[ 218], 40.00th=[ 218], 
50.00th=[ 218], 60.00th=[ 218], 00:23:13.023 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 230], 00:23:13.023 | 99.00th=[ 292], 99.50th=[ 393], 99.90th=[ 435], 99.95th=[ 435], 00:23:13.023 | 99.99th=[ 435] 00:23:13.023 bw ( KiB/s): min=16384, max=19456, per=3.33%, avg=18688.00, stdev=674.76, samples=20 00:23:13.023 iops : min= 64, max= 76, avg=73.00, stdev= 2.64, samples=20 00:23:13.023 lat (msec) : 20=0.13%, 50=0.27%, 100=0.54%, 250=96.51%, 500=2.55% 00:23:13.023 cpu : usr=0.27%, sys=0.30%, ctx=779, majf=0, minf=1 00:23:13.023 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.023 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.023 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.023 job20: (groupid=0, jobs=1): err= 0: pid=81540: Thu Jul 25 09:05:18 2024 00:23:13.023 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10177msec); 0 zone resets 00:23:13.023 slat (usec): min=21, max=212, avg=73.05, stdev=15.04 00:23:13.023 clat (msec): min=23, max=444, avg=218.54, stdev=26.19 00:23:13.023 lat (msec): min=23, max=444, avg=218.61, stdev=26.19 00:23:13.023 clat percentiles (msec): 00:23:13.023 | 1.00th=[ 118], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.023 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.023 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 232], 00:23:13.023 | 99.00th=[ 300], 99.50th=[ 405], 99.90th=[ 447], 99.95th=[ 447], 00:23:13.023 | 99.99th=[ 447] 00:23:13.023 bw ( KiB/s): min=16384, max=19456, per=3.32%, avg=18662.40, stdev=632.00, samples=20 00:23:13.023 iops : min= 64, max= 76, avg=72.90, stdev= 2.47, samples=20 00:23:13.023 lat (msec) : 50=0.27%, 100=0.54%, 250=96.51%, 500=2.69% 00:23:13.023 cpu : usr=0.30%, sys=0.35%, ctx=743, majf=0, minf=1 
00:23:13.023 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.023 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.023 issued rwts: total=0,744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.023 job21: (groupid=0, jobs=1): err= 0: pid=81551: Thu Jul 25 09:05:18 2024 00:23:13.023 write: IOPS=73, BW=18.4MiB/s (19.3MB/s)(188MiB/10190msec); 0 zone resets 00:23:13.023 slat (usec): min=31, max=335, avg=62.41, stdev=22.33 00:23:13.023 clat (msec): min=3, max=455, avg=217.07, stdev=32.83 00:23:13.023 lat (msec): min=3, max=455, avg=217.13, stdev=32.83 00:23:13.023 clat percentiles (msec): 00:23:13.023 | 1.00th=[ 40], 5.00th=[ 211], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.023 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.023 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 232], 00:23:13.023 | 99.00th=[ 363], 99.50th=[ 414], 99.90th=[ 456], 99.95th=[ 456], 00:23:13.023 | 99.99th=[ 456] 00:23:13.023 bw ( KiB/s): min=15903, max=20992, per=3.34%, avg=18780.55, stdev=888.86, samples=20 00:23:13.023 iops : min= 62, max= 82, avg=73.10, stdev= 3.48, samples=20 00:23:13.023 lat (msec) : 4=0.13%, 10=0.27%, 20=0.27%, 50=0.40%, 100=0.53% 00:23:13.023 lat (msec) : 250=95.87%, 500=2.53% 00:23:13.023 cpu : usr=0.27%, sys=0.30%, ctx=775, majf=0, minf=1 00:23:13.023 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.023 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.023 issued rwts: total=0,750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.023 job22: (groupid=0, jobs=1): err= 0: pid=81556: Thu Jul 25 09:05:18 2024 
00:23:13.023 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10182msec); 0 zone resets 00:23:13.023 slat (usec): min=27, max=955, avg=81.42, stdev=41.30 00:23:13.023 clat (msec): min=22, max=436, avg=218.32, stdev=25.35 00:23:13.023 lat (msec): min=23, max=436, avg=218.40, stdev=25.34 00:23:13.023 clat percentiles (msec): 00:23:13.023 | 1.00th=[ 117], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.023 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.023 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 234], 00:23:13.023 | 99.00th=[ 300], 99.50th=[ 397], 99.90th=[ 435], 99.95th=[ 435], 00:23:13.023 | 99.99th=[ 435] 00:23:13.023 bw ( KiB/s): min=16896, max=19456, per=3.33%, avg=18684.25, stdev=563.42, samples=20 00:23:13.023 iops : min= 66, max= 76, avg=72.90, stdev= 2.22, samples=20 00:23:13.023 lat (msec) : 50=0.40%, 100=0.40%, 250=96.51%, 500=2.68% 00:23:13.023 cpu : usr=0.34%, sys=0.45%, ctx=760, majf=0, minf=1 00:23:13.023 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.023 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.023 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.023 job23: (groupid=0, jobs=1): err= 0: pid=81575: Thu Jul 25 09:05:18 2024 00:23:13.023 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10179msec); 0 zone resets 00:23:13.023 slat (usec): min=24, max=1797, avg=72.28, stdev=69.03 00:23:13.023 clat (msec): min=15, max=436, avg=218.23, stdev=26.03 00:23:13.023 lat (msec): min=16, max=437, avg=218.30, stdev=26.01 00:23:13.023 clat percentiles (msec): 00:23:13.023 | 1.00th=[ 112], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.023 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.023 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 
224], 95.00th=[ 230], 00:23:13.023 | 99.00th=[ 296], 99.50th=[ 397], 99.90th=[ 439], 99.95th=[ 439], 00:23:13.023 | 99.99th=[ 439] 00:23:13.023 bw ( KiB/s): min=16384, max=19456, per=3.33%, avg=18688.00, stdev=674.76, samples=20 00:23:13.023 iops : min= 64, max= 76, avg=73.00, stdev= 2.64, samples=20 00:23:13.023 lat (msec) : 20=0.13%, 50=0.27%, 100=0.54%, 250=96.38%, 500=2.68% 00:23:13.023 cpu : usr=0.29%, sys=0.28%, ctx=750, majf=0, minf=1 00:23:13.023 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.023 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.023 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.023 job24: (groupid=0, jobs=1): err= 0: pid=81579: Thu Jul 25 09:05:18 2024 00:23:13.023 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10172msec); 0 zone resets 00:23:13.023 slat (usec): min=22, max=533, avg=65.07, stdev=40.15 00:23:13.023 clat (msec): min=18, max=442, avg=218.43, stdev=26.46 00:23:13.023 lat (msec): min=19, max=442, avg=218.50, stdev=26.46 00:23:13.023 clat percentiles (msec): 00:23:13.023 | 1.00th=[ 114], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.023 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.023 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 230], 00:23:13.023 | 99.00th=[ 351], 99.50th=[ 405], 99.90th=[ 443], 99.95th=[ 443], 00:23:13.023 | 99.99th=[ 443] 00:23:13.023 bw ( KiB/s): min=15872, max=19456, per=3.32%, avg=18662.40, stdev=769.79, samples=20 00:23:13.023 iops : min= 62, max= 76, avg=72.90, stdev= 3.01, samples=20 00:23:13.024 lat (msec) : 20=0.13%, 50=0.27%, 100=0.54%, 250=96.64%, 500=2.42% 00:23:13.024 cpu : usr=0.30%, sys=0.25%, ctx=785, majf=0, minf=1 00:23:13.024 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 
00:23:13.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.024 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.024 issued rwts: total=0,744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.024 job25: (groupid=0, jobs=1): err= 0: pid=81608: Thu Jul 25 09:05:18 2024 00:23:13.024 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(187MiB/10168msec); 0 zone resets 00:23:13.024 slat (usec): min=26, max=162, avg=74.80, stdev=16.69 00:23:13.024 clat (msec): min=23, max=408, avg=217.77, stdev=22.96 00:23:13.024 lat (msec): min=23, max=408, avg=217.85, stdev=22.96 00:23:13.024 clat percentiles (msec): 00:23:13.024 | 1.00th=[ 120], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.024 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.024 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 228], 00:23:13.024 | 99.00th=[ 284], 99.50th=[ 368], 99.90th=[ 409], 99.95th=[ 409], 00:23:13.024 | 99.99th=[ 409] 00:23:13.024 bw ( KiB/s): min=17408, max=19456, per=3.33%, avg=18709.80, stdev=452.28, samples=20 00:23:13.024 iops : min= 68, max= 76, avg=73.00, stdev= 1.75, samples=20 00:23:13.024 lat (msec) : 50=0.27%, 100=0.54%, 250=96.51%, 500=2.68% 00:23:13.024 cpu : usr=0.31%, sys=0.46%, ctx=750, majf=0, minf=1 00:23:13.024 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.024 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.024 issued rwts: total=0,746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.024 job26: (groupid=0, jobs=1): err= 0: pid=81678: Thu Jul 25 09:05:18 2024 00:23:13.024 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(187MiB/10187msec); 0 zone resets 00:23:13.024 slat (usec): min=31, max=271, avg=73.55, 
stdev=17.12 00:23:13.024 clat (msec): min=6, max=455, avg=217.83, stdev=30.52 00:23:13.024 lat (msec): min=7, max=455, avg=217.91, stdev=30.52 00:23:13.024 clat percentiles (msec): 00:23:13.024 | 1.00th=[ 73], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.024 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.024 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 232], 00:23:13.024 | 99.00th=[ 363], 99.50th=[ 418], 99.90th=[ 456], 99.95th=[ 456], 00:23:13.024 | 99.99th=[ 456] 00:23:13.024 bw ( KiB/s): min=15329, max=19968, per=3.33%, avg=18702.55, stdev=920.46, samples=20 00:23:13.024 iops : min= 59, max= 78, avg=72.80, stdev= 3.74, samples=20 00:23:13.024 lat (msec) : 10=0.13%, 20=0.27%, 50=0.40%, 100=0.54%, 250=96.12% 00:23:13.024 lat (msec) : 500=2.54% 00:23:13.024 cpu : usr=0.30%, sys=0.44%, ctx=757, majf=0, minf=1 00:23:13.024 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.024 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.024 issued rwts: total=0,747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.024 job27: (groupid=0, jobs=1): err= 0: pid=81684: Thu Jul 25 09:05:18 2024 00:23:13.024 write: IOPS=73, BW=18.4MiB/s (19.3MB/s)(188MiB/10199msec); 0 zone resets 00:23:13.024 slat (usec): min=26, max=200, avg=58.71, stdev=17.78 00:23:13.024 clat (msec): min=9, max=444, avg=216.99, stdev=31.21 00:23:13.024 lat (msec): min=9, max=444, avg=217.05, stdev=31.21 00:23:13.024 clat percentiles (msec): 00:23:13.024 | 1.00th=[ 47], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.024 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.024 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 234], 00:23:13.024 | 99.00th=[ 351], 99.50th=[ 405], 99.90th=[ 443], 99.95th=[ 443], 
00:23:13.024 | 99.99th=[ 443] 00:23:13.024 bw ( KiB/s): min=16351, max=20992, per=3.35%, avg=18797.30, stdev=814.48, samples=20 00:23:13.024 iops : min= 63, max= 82, avg=73.10, stdev= 3.31, samples=20 00:23:13.024 lat (msec) : 10=0.13%, 20=0.40%, 50=0.53%, 100=0.53%, 250=96.01% 00:23:13.024 lat (msec) : 500=2.40% 00:23:13.024 cpu : usr=0.25%, sys=0.30%, ctx=767, majf=0, minf=1 00:23:13.024 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.024 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.024 issued rwts: total=0,751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.024 job28: (groupid=0, jobs=1): err= 0: pid=81696: Thu Jul 25 09:05:18 2024 00:23:13.024 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10179msec); 0 zone resets 00:23:13.024 slat (usec): min=26, max=283, avg=73.38, stdev=22.56 00:23:13.024 clat (msec): min=21, max=434, avg=218.28, stdev=25.38 00:23:13.024 lat (msec): min=21, max=434, avg=218.35, stdev=25.39 00:23:13.024 clat percentiles (msec): 00:23:13.024 | 1.00th=[ 116], 5.00th=[ 213], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.024 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.024 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 234], 00:23:13.024 | 99.00th=[ 300], 99.50th=[ 397], 99.90th=[ 435], 99.95th=[ 435], 00:23:13.024 | 99.99th=[ 435] 00:23:13.024 bw ( KiB/s): min=16384, max=19456, per=3.33%, avg=18687.85, stdev=672.00, samples=20 00:23:13.024 iops : min= 64, max= 76, avg=72.95, stdev= 2.63, samples=20 00:23:13.024 lat (msec) : 50=0.40%, 100=0.40%, 250=96.51%, 500=2.68% 00:23:13.024 cpu : usr=0.33%, sys=0.38%, ctx=756, majf=0, minf=1 00:23:13.024 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:23:13.024 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.024 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.024 job29: (groupid=0, jobs=1): err= 0: pid=81697: Thu Jul 25 09:05:18 2024 00:23:13.024 write: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10178msec); 0 zone resets 00:23:13.024 slat (usec): min=30, max=360, avg=63.91, stdev=20.67 00:23:13.024 clat (msec): min=23, max=431, avg=218.25, stdev=25.06 00:23:13.024 lat (msec): min=23, max=431, avg=218.32, stdev=25.05 00:23:13.024 clat percentiles (msec): 00:23:13.024 | 1.00th=[ 117], 5.00th=[ 211], 10.00th=[ 213], 20.00th=[ 215], 00:23:13.024 | 30.00th=[ 218], 40.00th=[ 218], 50.00th=[ 218], 60.00th=[ 218], 00:23:13.024 | 70.00th=[ 220], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 228], 00:23:13.024 | 99.00th=[ 296], 99.50th=[ 393], 99.90th=[ 430], 99.95th=[ 430], 00:23:13.024 | 99.99th=[ 430] 00:23:13.024 bw ( KiB/s): min=16896, max=19456, per=3.33%, avg=18687.85, stdev=534.81, samples=20 00:23:13.024 iops : min= 66, max= 76, avg=72.95, stdev= 2.09, samples=20 00:23:13.024 lat (msec) : 50=0.40%, 100=0.40%, 250=96.51%, 500=2.68% 00:23:13.024 cpu : usr=0.30%, sys=0.29%, ctx=762, majf=0, minf=1 00:23:13.024 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:23:13.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.024 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.024 issued rwts: total=0,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.024 00:23:13.024 Run status group 0 (all jobs): 00:23:13.024 WRITE: bw=549MiB/s (575MB/s), 18.3MiB/s-18.4MiB/s (19.2MB/s-19.3MB/s), io=5596MiB (5867MB), run=10167-10199msec 00:23:13.024 00:23:13.024 Disk stats (read/write): 00:23:13.024 sda: ios=48/740, merge=0/0, ticks=258/158916, 
in_queue=159173, util=97.34% 00:23:13.024 sdb: ios=48/737, merge=0/0, ticks=295/158965, in_queue=159260, util=97.26% 00:23:13.024 sdc: ios=48/732, merge=0/0, ticks=332/158410, in_queue=158742, util=97.37% 00:23:13.024 sdd: ios=48/731, merge=0/0, ticks=299/158344, in_queue=158643, util=97.29% 00:23:13.024 sde: ios=48/731, merge=0/0, ticks=291/158299, in_queue=158590, util=97.59% 00:23:13.024 sdf: ios=48/732, merge=0/0, ticks=285/158485, in_queue=158770, util=97.45% 00:23:13.024 sdg: ios=48/733, merge=0/0, ticks=311/158683, in_queue=158993, util=97.65% 00:23:13.024 sdh: ios=48/731, merge=0/0, ticks=308/158329, in_queue=158637, util=97.82% 00:23:13.024 sdi: ios=48/742, merge=0/0, ticks=289/159243, in_queue=159531, util=98.51% 00:23:13.024 sdj: ios=39/731, merge=0/0, ticks=303/158231, in_queue=158535, util=97.68% 00:23:13.024 sdk: ios=39/731, merge=0/0, ticks=252/158409, in_queue=158661, util=97.78% 00:23:13.024 sdl: ios=35/740, merge=0/0, ticks=274/159126, in_queue=159400, util=98.37% 00:23:13.024 sdm: ios=33/733, merge=0/0, ticks=295/158540, in_queue=158836, util=98.22% 00:23:13.024 sdn: ios=28/734, merge=0/0, ticks=253/158811, in_queue=159063, util=98.03% 00:23:13.024 sdo: ios=27/732, merge=0/0, ticks=259/158466, in_queue=158725, util=97.95% 00:23:13.024 sdp: ios=23/735, merge=0/0, ticks=258/158727, in_queue=158985, util=98.38% 00:23:13.024 sdq: ios=20/730, merge=0/0, ticks=209/158099, in_queue=158308, util=97.67% 00:23:13.024 sdr: ios=21/731, merge=0/0, ticks=249/158280, in_queue=158529, util=98.00% 00:23:13.024 sds: ios=19/736, merge=0/0, ticks=250/158678, in_queue=158928, util=98.25% 00:23:13.024 sdt: ios=19/733, merge=0/0, ticks=219/158513, in_queue=158732, util=98.25% 00:23:13.024 sdu: ios=19/732, merge=0/0, ticks=213/158543, in_queue=158757, util=98.06% 00:23:13.024 sdv: ios=19/740, merge=0/0, ticks=133/158943, in_queue=159076, util=98.52% 00:23:13.024 sdw: ios=0/732, merge=0/0, ticks=0/158516, in_queue=158515, util=97.93% 00:23:13.024 sdx: ios=0/733, 
merge=0/0, ticks=0/158570, in_queue=158570, util=98.03% 00:23:13.024 sdy: ios=5/733, merge=0/0, ticks=56/158560, in_queue=158615, util=98.15% 00:23:13.024 sdz: ios=0/732, merge=0/0, ticks=0/158596, in_queue=158595, util=97.96% 00:23:13.024 sdaa: ios=0/737, merge=0/0, ticks=0/158921, in_queue=158921, util=98.62% 00:23:13.024 sdab: ios=0/741, merge=0/0, ticks=0/159183, in_queue=159182, util=98.76% 00:23:13.024 sdac: ios=0/732, merge=0/0, ticks=0/158512, in_queue=158512, util=98.47% 00:23:13.024 sdad: ios=0/732, merge=0/0, ticks=0/158492, in_queue=158492, util=98.71% 00:23:13.024 [2024-07-25 09:05:18.519606] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.024 [2024-07-25 09:05:18.521835] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.024 [2024-07-25 09:05:18.523866] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.025 [2024-07-25 09:05:18.525768] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.025 [2024-07-25 09:05:18.527805] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.025 [2024-07-25 09:05:18.529954] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.025 [2024-07-25 09:05:18.532063] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.025 [2024-07-25 09:05:18.534120] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.025 [2024-07-25 09:05:18.537516] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.025 09:05:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@79 -- # sync 00:23:13.025 [2024-07-25 09:05:18.541734] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:13.025 09:05:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@81 
-- # trap - SIGINT SIGTERM EXIT 00:23:13.025 09:05:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@83 -- # rm -f 00:23:13.025 09:05:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@84 -- # iscsicleanup 00:23:13.025 Cleaning up iSCSI connection 00:23:13.025 09:05:19 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:23:13.025 09:05:19 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:23:13.025 Logging out of session [sid: 41, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 42, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 43, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 44, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 45, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 46, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 47, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 48, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 49, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 50, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 51, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 52, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 53, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 54, target: 
iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 55, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 56, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 57, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 58, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 59, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 60, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 61, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 62, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 63, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 64, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 65, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 66, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 67, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 68, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 69, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:23:13.025 Logging out of session [sid: 70, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:23:13.025 Logout of [sid: 41, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:23:13.025 Logout of [sid: 42, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 
00:23:13.025 Logout of [sid: 43, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:23:13.025 Logout of [sid: 44, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:23:13.025 Logout of [sid: 45, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:23:13.025 Logout of [sid: 46, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:23:13.025 Logout of [sid: 47, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:23:13.025 Logout of [sid: 48, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:23:13.025 Logout of [sid: 49, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:23:13.025 Logout of [sid: 50, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:23:13.025 Logout of [sid: 51, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:23:13.025 Logout of [sid: 52, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:23:13.025 Logout of [sid: 53, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 54, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 55, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 56, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 57, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 58, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 59, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 60, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 
00:23:13.285 Logout of [sid: 61, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 62, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 63, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 64, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 65, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 66, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 67, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 68, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 69, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:23:13.285 Logout of [sid: 70, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 
00:23:13.285 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:23:13.285 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@985 -- # rm -rf 00:23:13.285 INFO: Removing lvol bdevs 00:23:13.285 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@85 -- # remove_backends 00:23:13.285 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@22 -- # echo 'INFO: Removing lvol bdevs' 00:23:13.285 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # seq 1 30 00:23:13.285 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:13.285 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_1 00:23:13.285 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_1 00:23:13.545 [2024-07-25 09:05:20.422308] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0377c2c6-6300-4100-8fec-d9efc6170504) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:13.545 INFO: lvol bdev lvs0/lbd_1 removed 00:23:13.545 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_1 removed' 00:23:13.545 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:13.545 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_2 00:23:13.545 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_2 00:23:13.545 [2024-07-25 09:05:20.617934] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (e4bb7038-6aa0-45cc-b6e6-f7f23a15a40f) 
received event(SPDK_BDEV_EVENT_REMOVE) 00:23:13.545 INFO: lvol bdev lvs0/lbd_2 removed 00:23:13.545 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_2 removed' 00:23:13.545 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:13.545 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_3 00:23:13.545 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_3 00:23:13.805 [2024-07-25 09:05:20.813701] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (f81c471c-5bb0-4687-96de-3714179ee2c0) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:13.805 INFO: lvol bdev lvs0/lbd_3 removed 00:23:13.805 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_3 removed' 00:23:13.805 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:13.805 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_4 00:23:13.805 09:05:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_4 00:23:14.065 [2024-07-25 09:05:21.001361] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (dac20a4d-28c9-45b3-84a9-e0f74cc37f94) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:14.065 INFO: lvol bdev lvs0/lbd_4 removed 00:23:14.065 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_4 removed' 00:23:14.065 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:14.065 
09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_5 00:23:14.065 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_5 00:23:14.065 [2024-07-25 09:05:21.165097] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (6f479dd7-652d-454f-8f50-bfe90363f4d0) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:14.065 INFO: lvol bdev lvs0/lbd_5 removed 00:23:14.065 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_5 removed' 00:23:14.065 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:14.065 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_6 00:23:14.065 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_6 00:23:14.325 [2024-07-25 09:05:21.352797] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (2b0943e8-602b-408f-bc2c-14369e41ef15) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:14.325 INFO: lvol bdev lvs0/lbd_6 removed 00:23:14.325 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_6 removed' 00:23:14.325 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:14.325 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_7 00:23:14.325 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_7 00:23:14.585 [2024-07-25 09:05:21.540610] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name 
(2c7389f7-3982-4111-b418-c3253956cc3e) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:14.585 INFO: lvol bdev lvs0/lbd_7 removed 00:23:14.585 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_7 removed' 00:23:14.585 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:14.585 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_8 00:23:14.585 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_8 00:23:14.845 [2024-07-25 09:05:21.736345] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (62787ff3-5807-46bc-9e7a-632adacbc5a7) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:14.845 INFO: lvol bdev lvs0/lbd_8 removed 00:23:14.845 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_8 removed' 00:23:14.845 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:14.845 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_9 00:23:14.845 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_9 00:23:14.845 [2024-07-25 09:05:21.920063] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b6c3517d-c5a3-4bca-ab8e-53481a080b42) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:14.845 INFO: lvol bdev lvs0/lbd_9 removed 00:23:14.845 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_9 removed' 00:23:14.845 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:23:14.845 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_10 00:23:14.845 09:05:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_10 00:23:15.105 [2024-07-25 09:05:22.103755] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (78723b1d-1828-43e2-a200-31571afc0d40) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:15.105 INFO: lvol bdev lvs0/lbd_10 removed 00:23:15.105 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_10 removed' 00:23:15.105 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:15.105 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_11 00:23:15.105 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_11 00:23:15.364 [2024-07-25 09:05:22.279561] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b194d033-67a7-4c5c-a724-1f1328da23dd) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:15.364 INFO: lvol bdev lvs0/lbd_11 removed 00:23:15.364 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_11 removed' 00:23:15.364 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:15.364 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_12 00:23:15.364 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_12 00:23:15.364 [2024-07-25 09:05:22.467257] lun.c: 398:bdev_event_cb: 
*NOTICE*: bdev name (4307361d-7e4d-4646-b6ba-f3b3781205bd) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:15.624 INFO: lvol bdev lvs0/lbd_12 removed 00:23:15.624 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_12 removed' 00:23:15.624 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:15.624 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_13 00:23:15.624 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_13 00:23:15.624 [2024-07-25 09:05:22.650990] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (892a30e6-0330-43cb-97ab-924ff368b9c3) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:15.624 INFO: lvol bdev lvs0/lbd_13 removed 00:23:15.624 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_13 removed' 00:23:15.624 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:15.624 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_14 00:23:15.624 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_14 00:23:15.884 [2024-07-25 09:05:22.846718] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (e5538db4-9d8b-4bf3-a97c-74e567f15f6e) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:15.884 INFO: lvol bdev lvs0/lbd_14 removed 00:23:15.884 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_14 removed' 00:23:15.884 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:15.884 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_15 00:23:15.884 09:05:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_15 00:23:16.143 [2024-07-25 09:05:23.058398] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (fd092222-6499-4f9b-bb62-4dc144e48a6e) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:16.143 INFO: lvol bdev lvs0/lbd_15 removed 00:23:16.143 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_15 removed' 00:23:16.143 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:16.143 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_16 00:23:16.143 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_16 00:23:16.143 [2024-07-25 09:05:23.242102] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (bbb7b7e1-3623-4583-8888-b9820965d31c) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:16.401 INFO: lvol bdev lvs0/lbd_16 removed 00:23:16.401 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_16 removed' 00:23:16.401 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:16.401 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_17 00:23:16.401 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_17 00:23:16.401 
[2024-07-25 09:05:23.425831] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (eebe88fb-4440-4cd2-9006-d07cf65a3f3c) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:16.401 INFO: lvol bdev lvs0/lbd_17 removed 00:23:16.401 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_17 removed' 00:23:16.402 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:16.402 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_18 00:23:16.402 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_18 00:23:16.659 [2024-07-25 09:05:23.621539] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (5b134293-a608-4816-a56b-10ed476cd9fa) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:16.659 INFO: lvol bdev lvs0/lbd_18 removed 00:23:16.659 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_18 removed' 00:23:16.659 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:16.659 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_19 00:23:16.659 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_19 00:23:16.918 [2024-07-25 09:05:23.817279] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (67620356-18df-47e6-b067-fe49d2a0ac02) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:16.918 INFO: lvol bdev lvs0/lbd_19 removed 00:23:16.918 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_19 removed' 00:23:16.918 09:05:23 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:16.918 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_20 00:23:16.918 09:05:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_20 00:23:16.918 [2024-07-25 09:05:24.004993] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (1cebc384-93a3-4d72-86f1-30b281a01d50) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:16.918 INFO: lvol bdev lvs0/lbd_20 removed 00:23:16.918 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_20 removed' 00:23:16.918 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:16.918 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_21 00:23:16.918 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_21 00:23:17.177 [2024-07-25 09:05:24.196698] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (e5c2fad1-9ea0-4531-a17a-21439235fceb) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:17.177 INFO: lvol bdev lvs0/lbd_21 removed 00:23:17.177 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_21 removed' 00:23:17.177 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:17.177 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_22 00:23:17.177 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete lvs0/lbd_22 00:23:17.436 [2024-07-25 09:05:24.380534] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (72a680cd-a029-4ca0-bc3a-6f3fa0dfadab) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:17.436 INFO: lvol bdev lvs0/lbd_22 removed 00:23:17.436 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_22 removed' 00:23:17.436 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:17.436 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_23 00:23:17.436 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_23 00:23:17.695 [2024-07-25 09:05:24.572240] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (43178e56-0edc-40dd-b5cd-18044cf0deb0) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:17.695 INFO: lvol bdev lvs0/lbd_23 removed 00:23:17.695 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_23 removed' 00:23:17.695 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:17.695 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_24 00:23:17.695 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_24 00:23:17.695 [2024-07-25 09:05:24.767954] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (deaee400-a285-4dcd-acdf-4a11d2a9daef) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:17.695 INFO: lvol bdev lvs0/lbd_24 removed 00:23:17.695 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_24 
removed' 00:23:17.695 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:17.695 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_25 00:23:17.695 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_25 00:23:17.954 [2024-07-25 09:05:24.947673] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (e896561c-f592-4b69-b1d3-202baac7e074) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:17.954 INFO: lvol bdev lvs0/lbd_25 removed 00:23:17.954 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_25 removed' 00:23:17.954 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:17.954 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_26 00:23:17.954 09:05:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_26 00:23:18.214 [2024-07-25 09:05:25.119382] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (2a4d141d-e99e-4926-bd8e-83bf800f04f3) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:18.214 INFO: lvol bdev lvs0/lbd_26 removed 00:23:18.214 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_26 removed' 00:23:18.214 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:18.214 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_27 00:23:18.214 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_27 00:23:18.214 [2024-07-25 09:05:25.311107] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0a25541c-1f00-4d23-9ce0-5d3fef147b94) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:18.214 INFO: lvol bdev lvs0/lbd_27 removed 00:23:18.214 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_27 removed' 00:23:18.214 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:18.214 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_28 00:23:18.214 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_28 00:23:18.473 [2024-07-25 09:05:25.502835] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (7a142bee-791a-424e-b9bf-091f32f0998c) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:18.473 INFO: lvol bdev lvs0/lbd_28 removed 00:23:18.473 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_28 removed' 00:23:18.473 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:18.473 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_29 00:23:18.473 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_29 00:23:18.733 [2024-07-25 09:05:25.706553] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (87da21d5-88eb-430d-93eb-c06d945b2880) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:18.733 INFO: lvol bdev lvs0/lbd_29 removed 00:23:18.733 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- 
# echo -e '\tINFO: lvol bdev lvs0/lbd_29 removed' 00:23:18.733 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:18.733 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_30 00:23:18.733 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_30 00:23:18.992 [2024-07-25 09:05:25.922229] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (45dcbc4f-c1a0-43c7-be9a-d5ef94c6cacd) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:18.992 INFO: lvol bdev lvs0/lbd_30 removed 00:23:18.992 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_30 removed' 00:23:18.992 09:05:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@28 -- # sleep 1 00:23:19.930 INFO: Removing lvol stores 00:23:19.930 09:05:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@30 -- # echo 'INFO: Removing lvol stores' 00:23:19.930 09:05:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:23:20.190 INFO: lvol store lvs0 removed 00:23:20.190 INFO: Removing NVMe 00:23:20.190 09:05:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@32 -- # echo 'INFO: lvol store lvs0 removed' 00:23:20.190 09:05:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@34 -- # echo 'INFO: Removing NVMe' 00:23:20.190 09:05:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:23:20.449 09:05:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@37 -- # return 0 00:23:20.449 09:05:27 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@86 -- # killprocess 79845 00:23:20.449 09:05:27 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 79845 ']' 00:23:20.449 09:05:27 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # kill -0 79845 00:23:20.449 09:05:27 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@955 -- # uname 00:23:20.449 09:05:27 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:20.449 09:05:27 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79845 00:23:20.449 killing process with pid 79845 00:23:20.450 09:05:27 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:20.450 09:05:27 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:20.450 09:05:27 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79845' 00:23:20.450 09:05:27 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@969 -- # kill 79845 00:23:20.450 09:05:27 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@974 -- # wait 79845 00:23:22.983 09:05:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@87 -- # iscsitestfini 00:23:22.983 09:05:29 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:23:22.983 00:23:22.983 real 0m48.143s 00:23:22.983 user 0m57.249s 00:23:22.983 sys 0m13.303s 00:23:22.983 09:05:29 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.983 09:05:29 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.983 ************************************ 00:23:22.983 END TEST iscsi_tgt_multiconnection 00:23:22.983 ************************************ 00:23:22.983 
09:05:29 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@46 -- # '[' 1 -eq 1 ']' 00:23:22.983 09:05:29 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@47 -- # run_test iscsi_tgt_ext4test /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test/ext4test.sh 00:23:22.983 09:05:29 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:22.983 09:05:29 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:22.983 09:05:29 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:23:22.983 ************************************ 00:23:22.983 START TEST iscsi_tgt_ext4test 00:23:22.983 ************************************ 00:23:22.983 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test/ext4test.sh 00:23:23.242 * Looking for test storage... 00:23:23.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test 00:23:23.242 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:23:23.242 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:23:23.242 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:23:23.242 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:23:23.242 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:23:23.242 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:23:23.242 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:23:23.242 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:23:23.242 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:23:23.243 09:05:30 
iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@24 -- # iscsitestinit 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@28 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@29 -- # node_base=iqn.2013-06.com.intel.ch.spdk 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@31 -- # timing_enter start_iscsi_tgt 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@34 -- # pid=82256 00:23:23.243 
09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@33 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@35 -- # echo 'Process pid: 82256' 00:23:23.243 Process pid: 82256 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@37 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@39 -- # waitforlisten 82256 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@831 -- # '[' -z 82256 ']' 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:23.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:23.243 09:05:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:23:23.243 [2024-07-25 09:05:30.254961] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:23.243 [2024-07-25 09:05:30.255090] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82256 ] 00:23:23.503 [2024-07-25 09:05:30.419098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.762 [2024-07-25 09:05:30.668622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.021 09:05:31 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:24.022 09:05:31 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@864 -- # return 0 00:23:24.022 09:05:31 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 4 -b iqn.2013-06.com.intel.ch.spdk 00:23:24.281 09:05:31 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:25.217 09:05:32 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:25.217 09:05:32 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:23:25.786 09:05:32 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 512 4096 --name Malloc0 00:23:26.772 Malloc0 00:23:26.772 iscsi_tgt is listening. Running tests... 00:23:26.772 09:05:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@44 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:23:26.772 09:05:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@46 -- # timing_exit start_iscsi_tgt 00:23:26.772 09:05:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:26.772 09:05:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:23:26.772 09:05:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:23:26.772 09:05:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:23:27.031 09:05:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_create Malloc0 00:23:27.031 true 00:23:27.317 09:05:34 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target0 Target0_alias EE_Malloc0:0 1:2 64 -d 00:23:27.317 09:05:34 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@55 -- # sleep 1 00:23:28.255 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@57 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:23:28.255 10.0.0.1:3260,1 iqn.2013-06.com.intel.ch.spdk:Target0 00:23:28.255 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@58 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:23:28.514 Logging in to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] 00:23:28.514 Login to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] successful. 
00:23:28.514 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@59 -- # waitforiscsidevices 1 00:23:28.514 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@116 -- # local num=1 00:23:28.514 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:23:28.514 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:23:28.514 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:23:28.515 [2024-07-25 09:05:35.398074] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=1 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@123 -- # return 0 00:23:28.515 Test error injection 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@61 -- # echo 'Test error injection' 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Malloc0 all failure -n 1000 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # iscsiadm -m session -P 3 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # grep 'Attached scsi disk' 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # awk '{print $4}' 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # head -n1 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # dev=sda 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@65 -- # waitforfile /dev/sda 00:23:28.515 09:05:35 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1265 -- # local i=0 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1276 -- # return 0 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@66 -- # make_filesystem ext4 /dev/sda 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@926 -- # local fstype=ext4 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@927 -- # local dev_name=/dev/sda 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@928 -- # local i=0 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@929 -- # local force 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@932 -- # force=-F 00:23:28.515 09:05:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:28.515 mke2fs 1.46.5 (30-Dec-2021) 00:23:29.034 Discarding device blocks: 0/131072 done 00:23:29.034 Warning: could not erase sector 2: Input/output error 00:23:29.034 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:29.034 Filesystem UUID: 1925f801-d210-46ce-bb05-9c34ca3e6571 00:23:29.034 Superblock backups stored on blocks: 00:23:29.034 32768, 98304 00:23:29.034 00:23:29.034 Allocating group tables: 0/4 done 00:23:29.294 Warning: could not read block 0: Input/output error 00:23:29.294 Warning: could not erase sector 0: Input/output error 00:23:29.294 Writing inode tables: 0/4 done 00:23:29.294 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:29.294 09:05:36 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 0 -ge 15 ']' 00:23:29.294 [2024-07-25 09:05:36.359964] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:29.294 09:05:36 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=1 00:23:29.294 09:05:36 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:30.672 09:05:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:30.672 mke2fs 1.46.5 (30-Dec-2021) 00:23:30.672 Discarding device blocks: 0/131072 done 00:23:30.672 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:30.672 Filesystem UUID: 61e21203-69b9-4fbf-860d-eb432ed8618d 00:23:30.672 Superblock backups stored on blocks: 00:23:30.672 32768, 98304 00:23:30.672 00:23:30.672 Allocating group tables: 0/4Warning: could not erase sector 2: Input/output error 00:23:30.672  done 00:23:30.932 Warning: could not read block 0: Input/output error 00:23:30.932 Warning: could not erase sector 0: Input/output error 00:23:30.932 Writing inode tables: 0/4 done 00:23:30.932 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:30.932 09:05:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 1 -ge 15 ']' 00:23:30.932 09:05:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=2 00:23:30.932 09:05:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:30.932 [2024-07-25 09:05:37.956422] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:31.874 09:05:38 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:31.874 mke2fs 1.46.5 (30-Dec-2021) 00:23:32.133 Discarding device blocks: 0/131072 done 00:23:32.394 Warning: could not erase sector 2: Input/output error 00:23:32.394 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:32.394 Filesystem UUID: 
9764991f-4600-4a1d-ac2c-1a56ad2d0ab9 00:23:32.394 Superblock backups stored on blocks: 00:23:32.394 32768, 98304 00:23:32.394 00:23:32.394 Allocating group tables: 0/4 done 00:23:32.394 Warning: could not read block 0: Input/output error 00:23:32.394 Warning: could not erase sector 0: Input/output error 00:23:32.394 Writing inode tables: 0/4 done 00:23:32.654 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:32.654 09:05:39 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 2 -ge 15 ']' 00:23:32.654 09:05:39 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=3 00:23:32.654 09:05:39 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:32.654 [2024-07-25 09:05:39.538929] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:33.591 09:05:40 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:33.591 mke2fs 1.46.5 (30-Dec-2021) 00:23:33.850 Discarding device blocks: 0/131072 done 00:23:33.850 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:33.850 Warning: could not erase sector 2: Input/output error 00:23:33.851 Filesystem UUID: c3e6711c-053a-4ca0-92d2-8f6c19cbff7e 00:23:33.851 Superblock backups stored on blocks: 00:23:33.851 32768, 98304 00:23:33.851 00:23:33.851 Allocating group tables: 0/4 done 00:23:34.110 Warning: could not read block 0: Input/output error 00:23:34.110 Warning: could not erase sector 0: Input/output error 00:23:34.110 Writing inode tables: 0/4 done 00:23:34.110 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:34.110 09:05:41 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 3 -ge 15 ']' 00:23:34.110 09:05:41 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=4 00:23:34.110 09:05:41 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:34.110 [2024-07-25 
09:05:41.224798] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:35.490 09:05:42 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:35.490 mke2fs 1.46.5 (30-Dec-2021) 00:23:35.490 Discarding device blocks: 0/131072 done 00:23:35.490 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:35.490 Filesystem UUID: 5dbe7d28-4044-4a3a-aa67-73f8a801346e 00:23:35.490 Superblock backups stored on blocks: 00:23:35.490 32768, 98304 00:23:35.490 00:23:35.490 Allocating group tables: 0/4 done 00:23:35.490 Warning: could not erase sector 2: Input/output error 00:23:35.748 Warning: could not read block 0: Input/output error 00:23:35.748 Warning: could not erase sector 0: Input/output error 00:23:35.748 Writing inode tables: 0/4 done 00:23:35.748 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:35.748 09:05:42 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 4 -ge 15 ']' 00:23:35.748 09:05:42 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=5 00:23:35.748 09:05:42 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:35.748 [2024-07-25 09:05:42.807683] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.128 09:05:43 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:37.128 mke2fs 1.46.5 (30-Dec-2021) 00:23:37.128 Discarding device blocks: 0/131072 done 00:23:37.128 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:37.128 Filesystem UUID: 6be65300-9edf-43df-b727-32779024f7c2 00:23:37.128 Superblock backups stored on blocks: 00:23:37.128 32768, 98304 00:23:37.128 00:23:37.128 Allocating group tables: 0/4 done 00:23:37.128 Warning: could not erase sector 2: Input/output error 00:23:37.128 Warning: could not read block 0: Input/output error 00:23:37.387 Warning: could not erase sector 0: Input/output 
error 00:23:37.387 Writing inode tables: 0/4 done 00:23:37.387 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:37.387 09:05:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 5 -ge 15 ']' 00:23:37.387 09:05:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=6 00:23:37.387 09:05:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:37.387 [2024-07-25 09:05:44.390223] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.326 09:05:45 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:38.326 mke2fs 1.46.5 (30-Dec-2021) 00:23:38.585 Discarding device blocks: 0/131072 done 00:23:38.844 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:38.844 Filesystem UUID: 4af74eec-8c91-48b5-8d1c-c49f5e64a6c3 00:23:38.844 Superblock backups stored on blocks: 00:23:38.844 32768, 98304 00:23:38.844 00:23:38.844 Allocating group tables: 0/4 done 00:23:38.844 Warning: could not erase sector 2: Input/output error 00:23:38.844 Warning: could not read block 0: Input/output error 00:23:38.844 Warning: could not erase sector 0: Input/output error 00:23:38.844 Writing inode tables: 0/4 done 00:23:39.104 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:39.104 09:05:46 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 6 -ge 15 ']' 00:23:39.104 09:05:46 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=7 00:23:39.104 09:05:46 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:39.104 [2024-07-25 09:05:46.059238] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.039 09:05:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:40.039 mke2fs 1.46.5 (30-Dec-2021) 00:23:40.297 Discarding device blocks: 0/131072 done 
00:23:40.297 Warning: could not erase sector 2: Input/output error 00:23:40.297 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:40.297 Filesystem UUID: eef4095f-ac7e-45bb-a6d8-bb5c00f36e9d 00:23:40.297 Superblock backups stored on blocks: 00:23:40.297 32768, 98304 00:23:40.297 00:23:40.297 Allocating group tables: 0/4 done 00:23:40.557 Warning: could not read block 0: Input/output error 00:23:40.557 Warning: could not erase sector 0: Input/output error 00:23:40.557 Writing inode tables: 0/4 done 00:23:40.557 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:40.557 09:05:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 7 -ge 15 ']' 00:23:40.557 09:05:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=8 00:23:40.557 09:05:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:40.557 [2024-07-25 09:05:47.645044] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:41.967 09:05:48 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:41.967 mke2fs 1.46.5 (30-Dec-2021) 00:23:41.967 Discarding device blocks: 0/131072 done 00:23:41.967 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:41.967 Filesystem UUID: 46548346-7267-436f-bc79-ba50d6310422 00:23:41.967 Superblock backups stored on blocks: 00:23:41.967 32768, 98304 00:23:41.967 00:23:41.967 Allocating group tables: 0/4 done 00:23:41.967 Warning: could not erase sector 2: Input/output error 00:23:41.967 Warning: could not read block 0: Input/output error 00:23:42.226 Warning: could not erase sector 0: Input/output error 00:23:42.226 Writing inode tables: 0/4 done 00:23:42.226 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:42.226 09:05:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 8 -ge 15 ']' 00:23:42.226 [2024-07-25 09:05:49.225409] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:42.226 09:05:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=9 00:23:42.226 09:05:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:43.161 09:05:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:43.161 mke2fs 1.46.5 (30-Dec-2021) 00:23:43.418 Discarding device blocks: 0/131072 done 00:23:43.418 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:43.418 Filesystem UUID: 55bafb00-c5ef-4439-9319-5afd085eda72 00:23:43.418 Superblock backups stored on blocks: 00:23:43.418 32768, 98304 00:23:43.418 00:23:43.418 Allocating group tables: 0/4 done 00:23:43.418 Writing inode tables: 0/4 done 00:23:43.418 Creating journal (4096 blocks): done 00:23:43.419 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:23:43.419 09:05:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 9 -ge 15 ']' 00:23:43.419 [2024-07-25 09:05:50.520198] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:43.419 09:05:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=10 00:23:43.419 09:05:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:44.796 09:05:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:44.796 mke2fs 1.46.5 (30-Dec-2021) 00:23:44.796 Discarding device blocks: 0/131072 done 00:23:44.796 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:44.796 Filesystem UUID: c5e320c2-65b6-4302-921b-ef491ee02809 00:23:44.796 Superblock backups stored on blocks: 00:23:44.796 32768, 98304 00:23:44.796 00:23:44.796 Allocating group tables: 0/4 done 00:23:44.796 Writing inode tables: 0/4 done 00:23:44.796 Creating journal (4096 blocks): done 00:23:44.796 Writing 
superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:23:44.796 09:05:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 10 -ge 15 ']' 00:23:44.796 09:05:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=11 00:23:44.796 [2024-07-25 09:05:51.835407] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:44.796 09:05:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:45.734 09:05:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:45.734 mke2fs 1.46.5 (30-Dec-2021) 00:23:45.993 Discarding device blocks: 0/131072 done 00:23:45.993 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:45.993 Filesystem UUID: 6b7187b0-97b5-47d6-8258-c09296086939 00:23:45.993 Superblock backups stored on blocks: 00:23:45.993 32768, 98304 00:23:45.993 00:23:45.993 Allocating group tables: 0/4 done 00:23:45.993 Writing inode tables: 0/4 done 00:23:45.993 Creating journal (4096 blocks): done 00:23:46.253 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:23:46.253 09:05:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 11 -ge 15 ']' 00:23:46.253 09:05:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=12 00:23:46.253 [2024-07-25 09:05:53.154482] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:46.253 09:05:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:47.191 09:05:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:47.191 mke2fs 1.46.5 (30-Dec-2021) 00:23:47.451 Discarding device blocks: 0/131072 done 00:23:47.451 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:47.451 Filesystem UUID: 
6ad8349c-d5de-447d-9348-5863829e2113 00:23:47.451 Superblock backups stored on blocks: 00:23:47.451 32768, 98304 00:23:47.451 00:23:47.451 Allocating group tables: 0/4 done 00:23:47.451 Writing inode tables: 0/4 done 00:23:47.451 Creating journal (4096 blocks): done 00:23:47.451 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:23:47.451 09:05:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 12 -ge 15 ']' 00:23:47.451 [2024-07-25 09:05:54.463983] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:47.451 09:05:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=13 00:23:47.451 09:05:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:48.390 09:05:55 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:48.390 mke2fs 1.46.5 (30-Dec-2021) 00:23:48.650 Discarding device blocks: 0/131072 done 00:23:48.650 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:48.650 Filesystem UUID: c3bcdf2e-dd3b-465e-ba91-6a7026162bcd 00:23:48.650 Superblock backups stored on blocks: 00:23:48.650 32768, 98304 00:23:48.650 00:23:48.650 Allocating group tables: 0/4 done 00:23:48.650 Writing inode tables: 0/4 done 00:23:48.650 Creating journal (4096 blocks): done 00:23:48.650 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:23:48.650 09:05:55 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 13 -ge 15 ']' 00:23:48.650 09:05:55 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=14 00:23:48.650 09:05:55 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:48.650 [2024-07-25 09:05:55.759210] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:50.029 09:05:56 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:50.029 mke2fs 1.46.5 (30-Dec-2021) 00:23:50.029 Discarding device blocks: 0/131072 done 00:23:50.029 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:50.029 Filesystem UUID: 12e5affb-c503-4aad-a8cf-137eb04fa8e5 00:23:50.029 Superblock backups stored on blocks: 00:23:50.029 32768, 98304 00:23:50.029 00:23:50.029 Allocating group tables: 0/4 done 00:23:50.029 Writing inode tables: 0/4 done 00:23:50.029 Creating journal (4096 blocks): done 00:23:50.029 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:23:50.029 [2024-07-25 09:05:57.071153] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:50.029 09:05:57 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 14 -ge 15 ']' 00:23:50.029 09:05:57 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@941 -- # i=15 00:23:50.029 09:05:57 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@942 -- # sleep 1 00:23:50.968 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:50.968 mke2fs 1.46.5 (30-Dec-2021) 00:23:51.228 Discarding device blocks: 0/131072 done 00:23:51.228 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:51.228 Filesystem UUID: e98c4213-346d-4e49-8658-3922206930f1 00:23:51.228 Superblock backups stored on blocks: 00:23:51.228 32768, 98304 00:23:51.228 00:23:51.228 Allocating group tables: 0/4 done 00:23:51.228 Writing inode tables: 0/4 done 00:23:51.228 Creating journal (4096 blocks): done 00:23:51.487 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:23:51.487 [2024-07-25 09:05:58.385160] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:51.487 09:05:58 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@938 -- # '[' 15 -ge 15 ']' 00:23:51.487 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # return 1 00:23:51.487 mkfs failed as expected 00:23:51.487 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@70 -- # echo 'mkfs failed as expected' 00:23:51.487 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@73 -- # iscsicleanup 00:23:51.487 Cleaning up iSCSI connection 00:23:51.487 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:23:51.487 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:23:51.487 Logging out of session [sid: 71, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] 00:23:51.487 Logout of [sid: 71, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:23:51.487 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:23:51.487 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@985 -- # rm -rf 00:23:51.487 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Malloc0 clear failure 00:23:51.746 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2013-06.com.intel.ch.spdk:Target0 00:23:51.746 Error injection test done 00:23:51.746 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@76 -- # echo 'Error injection test done' 00:23:51.746 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@78 -- # get_bdev_size Nvme0n1 00:23:51.746 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1378 -- # local bdev_name=Nvme0n1 00:23:51.746 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1379 -- # local 
bdev_info 00:23:51.746 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1380 -- # local bs 00:23:51.747 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1381 -- # local nb 00:23:51.747 09:05:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 00:23:52.006 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:52.006 { 00:23:52.006 "name": "Nvme0n1", 00:23:52.006 "aliases": [ 00:23:52.006 "f732e6fc-469d-4499-a167-2f7d190a61f9" 00:23:52.006 ], 00:23:52.006 "product_name": "NVMe disk", 00:23:52.006 "block_size": 4096, 00:23:52.006 "num_blocks": 1310720, 00:23:52.006 "uuid": "f732e6fc-469d-4499-a167-2f7d190a61f9", 00:23:52.006 "assigned_rate_limits": { 00:23:52.006 "rw_ios_per_sec": 0, 00:23:52.006 "rw_mbytes_per_sec": 0, 00:23:52.006 "r_mbytes_per_sec": 0, 00:23:52.006 "w_mbytes_per_sec": 0 00:23:52.006 }, 00:23:52.006 "claimed": false, 00:23:52.006 "zoned": false, 00:23:52.006 "supported_io_types": { 00:23:52.006 "read": true, 00:23:52.006 "write": true, 00:23:52.006 "unmap": true, 00:23:52.006 "flush": true, 00:23:52.006 "reset": true, 00:23:52.006 "nvme_admin": true, 00:23:52.006 "nvme_io": true, 00:23:52.006 "nvme_io_md": false, 00:23:52.006 "write_zeroes": true, 00:23:52.006 "zcopy": false, 00:23:52.006 "get_zone_info": false, 00:23:52.006 "zone_management": false, 00:23:52.006 "zone_append": false, 00:23:52.006 "compare": true, 00:23:52.006 "compare_and_write": false, 00:23:52.006 "abort": true, 00:23:52.006 "seek_hole": false, 00:23:52.006 "seek_data": false, 00:23:52.006 "copy": true, 00:23:52.006 "nvme_iov_md": false 00:23:52.006 }, 00:23:52.006 "driver_specific": { 00:23:52.006 "nvme": [ 00:23:52.006 { 00:23:52.006 "pci_address": "0000:00:10.0", 00:23:52.006 "trid": { 00:23:52.006 "trtype": "PCIe", 00:23:52.006 "traddr": "0000:00:10.0" 00:23:52.006 }, 00:23:52.006 "ctrlr_data": { 00:23:52.006 
"cntlid": 0, 00:23:52.006 "vendor_id": "0x1b36", 00:23:52.006 "model_number": "QEMU NVMe Ctrl", 00:23:52.006 "serial_number": "12340", 00:23:52.006 "firmware_revision": "8.0.0", 00:23:52.006 "subnqn": "nqn.2019-08.org.qemu:12340", 00:23:52.006 "oacs": { 00:23:52.006 "security": 0, 00:23:52.006 "format": 1, 00:23:52.006 "firmware": 0, 00:23:52.006 "ns_manage": 1 00:23:52.006 }, 00:23:52.006 "multi_ctrlr": false, 00:23:52.006 "ana_reporting": false 00:23:52.006 }, 00:23:52.006 "vs": { 00:23:52.006 "nvme_version": "1.4" 00:23:52.006 }, 00:23:52.006 "ns_data": { 00:23:52.006 "id": 1, 00:23:52.006 "can_share": false 00:23:52.006 } 00:23:52.006 } 00:23:52.006 ], 00:23:52.006 "mp_policy": "active_passive" 00:23:52.006 } 00:23:52.006 } 00:23:52.006 ]' 00:23:52.006 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:52.006 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1383 -- # bs=4096 00:23:52.006 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:52.266 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:52.266 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:52.266 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1388 -- # echo 5120 00:23:52.266 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@78 -- # bdev_size=5120 00:23:52.266 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@79 -- # split_size=2560 00:23:52.266 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@80 -- # split_size=2560 00:23:52.266 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create Nvme0n1 2 -s 2560 00:23:52.266 Nvme0n1p0 Nvme0n1p1 00:23:52.266 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
iscsi_create_target_node Target1 Target1_alias Nvme0n1p0:0 1:2 64 -d 00:23:52.526 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@84 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:23:52.526 10.0.0.1:3260,1 iqn.2013-06.com.intel.ch.spdk:Target1 00:23:52.526 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@85 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:23:52.526 Logging in to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] 00:23:52.526 Login to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@86 -- # waitforiscsidevices 1 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@116 -- # local num=1 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:23:52.785 [2024-07-25 09:05:59.649897] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=1 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@123 -- # return 0 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # iscsiadm -m session -P 3 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # grep 'Attached scsi disk' 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # awk '{print $4}' 00:23:52.785 09:05:59 
iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # head -n1 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # dev=sda 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@89 -- # waitforfile /dev/sda 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1265 -- # local i=0 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1276 -- # return 0 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@91 -- # make_filesystem ext4 /dev/sda 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@926 -- # local fstype=ext4 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@927 -- # local dev_name=/dev/sda 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@928 -- # local i=0 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@929 -- # local force 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@932 -- # force=-F 00:23:52.785 09:05:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda 00:23:52.785 mke2fs 1.46.5 (30-Dec-2021) 00:23:52.785 Discarding device blocks: 0/655360 done 00:23:52.785 Creating filesystem with 655360 4k blocks and 163840 inodes 00:23:52.785 Filesystem UUID: ce2165cc-6e67-4e09-9f80-931e7a70a41c 00:23:52.785 Superblock backups stored on blocks: 00:23:52.785 32768, 98304, 163840, 229376, 294912 00:23:52.785 00:23:52.785 Allocating group tables: 0/20 done 00:23:52.785 Writing inode tables: 0/20 done 00:23:53.044 
Creating journal (16384 blocks): done 00:23:53.044 Writing superblocks and filesystem accounting information: 0/20 done 00:23:53.044 00:23:53.044 09:06:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@945 -- # return 0 00:23:53.044 09:06:00 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@92 -- # mkdir -p /mnt/sdadir 00:23:53.044 09:06:00 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@93 -- # mount -o sync /dev/sda /mnt/sdadir 00:23:53.044 09:06:00 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@95 -- # rsync -qav --exclude=.git '--exclude=*.o' /home/vagrant/spdk_repo/spdk/ /mnt/sdadir/spdk 00:25:14.518 09:07:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@97 -- # make -C /mnt/sdadir/spdk clean 00:25:14.518 make: Entering directory '/mnt/sdadir/spdk' 00:26:22.272 make[1]: Nothing to be done for 'clean'. 00:26:22.272 make: Leaving directory '/mnt/sdadir/spdk' 00:26:22.272 09:08:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@98 -- # cd /mnt/sdadir/spdk 00:26:22.272 09:08:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@98 -- # ./configure --disable-unit-tests --disable-tests 00:26:22.272 Using default SPDK env in /mnt/sdadir/spdk/lib/env_dpdk 00:26:22.272 Using default DPDK in /mnt/sdadir/spdk/dpdk/build 00:26:37.170 Configuring ISA-L (logfile: /mnt/sdadir/spdk/.spdk-isal.log)...done. 00:26:59.226 Configuring ISA-L-crypto (logfile: /mnt/sdadir/spdk/.spdk-isal-crypto.log)...done. 00:27:00.600 Creating mk/config.mk...done. 00:27:00.600 Creating mk/cc.flags.mk...done. 00:27:00.600 Type 'make' to build. 00:27:00.600 09:09:07 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@99 -- # make -C /mnt/sdadir/spdk -j 00:27:00.600 make: Entering directory '/mnt/sdadir/spdk' 00:27:00.858 make[1]: Nothing to be done for 'all'. 
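The `make_filesystem` trace above (lines tagged `autotest_common.sh@926`–`@945`) retries `mkfs.ext4 -F` up to 15 times with a one-second pause, returning failure once the counter reaches the limit — which is exactly what the error-injection phase expects. A minimal sketch of that retry loop, reconstructed from the trace (the helper name, the 15-attempt limit, and the `-F` force flag come from the log; the exact body of the real script may differ):

```shell
#!/usr/bin/env bash
# Sketch of the retry loop traced in the log: attempt mkfs, and on failure
# retry up to 15 times with a 1 s sleep, returning 1 when attempts run out.
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force=""
    # The trace shows force=-F being set for ext4 so mkfs skips its prompt.
    [ "$fstype" = ext4 ] && force=-F
    while true; do
        if mkfs."$fstype" $force "$dev_name"; then
            return 0                      # filesystem created
        fi
        if [ "$i" -ge 15 ]; then
            return 1                      # attempts exhausted: "mkfs failed as expected"
        fi
        i=$((i + 1))
        sleep 1
    done
}
```

With the error bdev injecting I/O errors, every attempt fails and the function returns 1; after `bdev_error_inject_error EE_Malloc0 clear failure`, the very next invocation succeeds on the first try, as the second `mkfs.ext4` run in the log shows.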
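The `get_bdev_size` trace earlier in the run pulls `block_size` and `num_blocks` out of `rpc.py bdev_get_bdevs -b Nvme0n1` with jq and converts them to MiB before splitting the bdev in two. The arithmetic, using the exact values from this log (a sketch of the computation, not the helper's verbatim source):

```shell
#!/usr/bin/env bash
# get_bdev_size arithmetic as traced: blocks * block size, converted to MiB.
bs=4096                                   # jq '.[] .block_size'  from bdev_get_bdevs
nb=1310720                                # jq '.[] .num_blocks'  from bdev_get_bdevs
bdev_size=$(( bs * nb / 1024 / 1024 ))    # 5120 MiB total
split_size=$(( bdev_size / 2 ))           # 2560 MiB per half, passed to bdev_split_create
echo "$bdev_size $split_size"             # prints: 5120 2560
```

This matches the log: `bdev_size=5120`, then `bdev_split_create Nvme0n1 2 -s 2560` yields `Nvme0n1p0 Nvme0n1p1`, and `Nvme0n1p0` backs the `Target1` node used for the full build-on-iSCSI test.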
00:27:27.403 The Meson build system 00:27:27.403 Version: 1.3.1 00:27:27.403 Source dir: /mnt/sdadir/spdk/dpdk 00:27:27.403 Build dir: /mnt/sdadir/spdk/dpdk/build-tmp 00:27:27.403 Build type: native build 00:27:27.403 Program cat found: YES (/usr/bin/cat) 00:27:27.403 Project name: DPDK 00:27:27.403 Project version: 24.03.0 00:27:27.403 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:27:27.403 C linker for the host machine: cc ld.bfd 2.39-16 00:27:27.403 Host machine cpu family: x86_64 00:27:27.403 Host machine cpu: x86_64 00:27:27.403 Program pkg-config found: YES (/usr/bin/pkg-config) 00:27:27.403 Program check-symbols.sh found: YES (/mnt/sdadir/spdk/dpdk/buildtools/check-symbols.sh) 00:27:27.403 Program options-ibverbs-static.sh found: YES (/mnt/sdadir/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:27:27.403 Program python3 found: YES (/usr/bin/python3) 00:27:27.403 Program cat found: YES (/usr/bin/cat) 00:27:27.403 Compiler for C supports arguments -march=native: YES 00:27:27.403 Checking for size of "void *" : 8 00:27:27.403 Checking for size of "void *" : 8 (cached) 00:27:27.403 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:27:27.403 Library m found: YES 00:27:27.403 Library numa found: YES 00:27:27.403 Has header "numaif.h" : YES 00:27:27.403 Library fdt found: NO 00:27:27.403 Library execinfo found: NO 00:27:27.403 Has header "execinfo.h" : YES 00:27:27.403 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:27:27.403 Run-time dependency libarchive found: NO (tried pkgconfig) 00:27:27.403 Run-time dependency libbsd found: NO (tried pkgconfig) 00:27:27.403 Run-time dependency jansson found: NO (tried pkgconfig) 00:27:27.403 Run-time dependency openssl found: YES 3.0.9 00:27:27.403 Run-time dependency libpcap found: YES 1.10.4 00:27:27.403 Has header "pcap.h" with dependency libpcap: YES 00:27:27.403 Compiler for C supports arguments -Wcast-qual: YES 00:27:27.403 Compiler for C 
supports arguments -Wdeprecated: YES 00:27:27.403 Compiler for C supports arguments -Wformat: YES 00:27:27.403 Compiler for C supports arguments -Wformat-nonliteral: YES 00:27:27.403 Compiler for C supports arguments -Wformat-security: YES 00:27:27.403 Compiler for C supports arguments -Wmissing-declarations: YES 00:27:27.403 Compiler for C supports arguments -Wmissing-prototypes: YES 00:27:27.403 Compiler for C supports arguments -Wnested-externs: YES 00:27:27.403 Compiler for C supports arguments -Wold-style-definition: YES 00:27:27.403 Compiler for C supports arguments -Wpointer-arith: YES 00:27:27.403 Compiler for C supports arguments -Wsign-compare: YES 00:27:27.403 Compiler for C supports arguments -Wstrict-prototypes: YES 00:27:27.403 Compiler for C supports arguments -Wundef: YES 00:27:27.403 Compiler for C supports arguments -Wwrite-strings: YES 00:27:27.403 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:27:27.403 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:27:27.403 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:27:27.403 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:27:27.403 Program objdump found: YES (/usr/bin/objdump) 00:27:27.403 Compiler for C supports arguments -mavx512f: YES 00:27:27.403 Checking if "AVX512 checking" compiles: YES 00:27:27.403 Fetching value of define "__SSE4_2__" : 1 00:27:27.403 Fetching value of define "__AES__" : 1 00:27:27.403 Fetching value of define "__AVX__" : 1 00:27:27.403 Fetching value of define "__AVX2__" : 1 00:27:27.403 Fetching value of define "__AVX512BW__" : 1 00:27:27.403 Fetching value of define "__AVX512CD__" : 1 00:27:27.403 Fetching value of define "__AVX512DQ__" : 1 00:27:27.403 Fetching value of define "__AVX512F__" : 1 00:27:27.403 Fetching value of define "__AVX512VL__" : 1 00:27:27.403 Fetching value of define "__PCLMUL__" : 1 00:27:27.403 Fetching value of define "__RDRND__" : 1 00:27:27.403 Fetching value of 
define "__RDSEED__" : 1 00:27:27.403 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:27:27.403 Fetching value of define "__znver1__" : (undefined) 00:27:27.403 Fetching value of define "__znver2__" : (undefined) 00:27:27.403 Fetching value of define "__znver3__" : (undefined) 00:27:27.403 Fetching value of define "__znver4__" : (undefined) 00:27:27.403 Compiler for C supports arguments -Wno-format-truncation: YES 00:27:27.403 Checking for function "getentropy" : NO 00:27:27.403 Fetching value of define "__PCLMUL__" : 1 (cached) 00:27:27.403 Fetching value of define "__AVX512F__" : 1 (cached) 00:27:27.403 Fetching value of define "__AVX512BW__" : 1 (cached) 00:27:27.403 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:27:27.403 Fetching value of define "__AVX512VL__" : 1 (cached) 00:27:27.403 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:27:27.403 Compiler for C supports arguments -mpclmul: YES 00:27:27.403 Compiler for C supports arguments -maes: YES 00:27:27.403 Compiler for C supports arguments -mavx512f: YES (cached) 00:27:27.403 Compiler for C supports arguments -mavx512bw: YES 00:27:27.403 Compiler for C supports arguments -mavx512dq: YES 00:27:27.403 Compiler for C supports arguments -mavx512vl: YES 00:27:27.403 Compiler for C supports arguments -mvpclmulqdq: YES 00:27:27.403 Compiler for C supports arguments -mavx2: YES 00:27:27.403 Compiler for C supports arguments -mavx: YES 00:27:27.403 Compiler for C supports arguments -Wno-cast-qual: YES 00:27:27.403 Has header "linux/userfaultfd.h" : YES 00:27:27.403 Has header "linux/vduse.h" : YES 00:27:27.403 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:27:27.403 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:27:27.403 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:27:27.403 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:27:27.403 Message: Disabling event/* drivers: 
missing internal dependency "eventdev" 00:27:27.403 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:27:27.403 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:27:27.403 Program doxygen found: YES (/usr/bin/doxygen) 00:27:27.403 Configuring doxy-api-html.conf using configuration 00:27:27.403 Configuring doxy-api-man.conf using configuration 00:27:27.403 Program mandb found: YES (/usr/bin/mandb) 00:27:27.403 Program sphinx-build found: NO 00:27:27.403 Configuring rte_build_config.h using configuration 00:27:27.403 Message: 00:27:27.403 ================= 00:27:27.403 Applications Enabled 00:27:27.403 ================= 00:27:27.403 00:27:27.403 apps: 00:27:27.403 00:27:27.403 00:27:27.403 Message: 00:27:27.403 ================= 00:27:27.403 Libraries Enabled 00:27:27.403 ================= 00:27:27.403 00:27:27.403 libs: 00:27:27.403 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:27:27.403 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:27:27.403 cryptodev, dmadev, power, reorder, security, vhost, 00:27:27.403 00:27:27.403 Message: 00:27:27.403 =============== 00:27:27.403 Drivers Enabled 00:27:27.403 =============== 00:27:27.403 00:27:27.403 common: 00:27:27.403 00:27:27.403 bus: 00:27:27.403 pci, vdev, 00:27:27.403 mempool: 00:27:27.403 ring, 00:27:27.403 dma: 00:27:27.403 00:27:27.403 net: 00:27:27.403 00:27:27.403 crypto: 00:27:27.403 00:27:27.403 compress: 00:27:27.403 00:27:27.403 vdpa: 00:27:27.403 00:27:27.403 00:27:27.403 Message: 00:27:27.403 ================= 00:27:27.403 Content Skipped 00:27:27.403 ================= 00:27:27.403 00:27:27.403 apps: 00:27:27.403 dumpcap: explicitly disabled via build config 00:27:27.403 graph: explicitly disabled via build config 00:27:27.403 pdump: explicitly disabled via build config 00:27:27.403 proc-info: explicitly disabled via build config 00:27:27.403 test-acl: explicitly disabled via build config 00:27:27.403 test-bbdev: explicitly 
disabled via build config 00:27:27.403 test-cmdline: explicitly disabled via build config 00:27:27.403 test-compress-perf: explicitly disabled via build config 00:27:27.403 test-crypto-perf: explicitly disabled via build config 00:27:27.403 test-dma-perf: explicitly disabled via build config 00:27:27.403 test-eventdev: explicitly disabled via build config 00:27:27.403 test-fib: explicitly disabled via build config 00:27:27.403 test-flow-perf: explicitly disabled via build config 00:27:27.403 test-gpudev: explicitly disabled via build config 00:27:27.403 test-mldev: explicitly disabled via build config 00:27:27.403 test-pipeline: explicitly disabled via build config 00:27:27.403 test-pmd: explicitly disabled via build config 00:27:27.404 test-regex: explicitly disabled via build config 00:27:27.404 test-sad: explicitly disabled via build config 00:27:27.404 test-security-perf: explicitly disabled via build config 00:27:27.404 00:27:27.404 libs: 00:27:27.404 argparse: explicitly disabled via build config 00:27:27.404 metrics: explicitly disabled via build config 00:27:27.404 acl: explicitly disabled via build config 00:27:27.404 bbdev: explicitly disabled via build config 00:27:27.404 bitratestats: explicitly disabled via build config 00:27:27.404 bpf: explicitly disabled via build config 00:27:27.404 cfgfile: explicitly disabled via build config 00:27:27.404 distributor: explicitly disabled via build config 00:27:27.404 efd: explicitly disabled via build config 00:27:27.404 eventdev: explicitly disabled via build config 00:27:27.404 dispatcher: explicitly disabled via build config 00:27:27.404 gpudev: explicitly disabled via build config 00:27:27.404 gro: explicitly disabled via build config 00:27:27.404 gso: explicitly disabled via build config 00:27:27.404 ip_frag: explicitly disabled via build config 00:27:27.404 jobstats: explicitly disabled via build config 00:27:27.404 latencystats: explicitly disabled via build config 00:27:27.404 lpm: explicitly disabled via 
build config 00:27:27.404 member: explicitly disabled via build config 00:27:27.404 pcapng: explicitly disabled via build config 00:27:27.404 rawdev: explicitly disabled via build config 00:27:27.404 regexdev: explicitly disabled via build config 00:27:27.404 mldev: explicitly disabled via build config 00:27:27.404 rib: explicitly disabled via build config 00:27:27.404 sched: explicitly disabled via build config 00:27:27.404 stack: explicitly disabled via build config 00:27:27.404 ipsec: explicitly disabled via build config 00:27:27.404 pdcp: explicitly disabled via build config 00:27:27.404 fib: explicitly disabled via build config 00:27:27.404 port: explicitly disabled via build config 00:27:27.404 pdump: explicitly disabled via build config 00:27:27.404 table: explicitly disabled via build config 00:27:27.404 pipeline: explicitly disabled via build config 00:27:27.404 graph: explicitly disabled via build config 00:27:27.404 node: explicitly disabled via build config 00:27:27.404 00:27:27.404 drivers: 00:27:27.404 common/cpt: not in enabled drivers build config 00:27:27.404 common/dpaax: not in enabled drivers build config 00:27:27.404 common/iavf: not in enabled drivers build config 00:27:27.404 common/idpf: not in enabled drivers build config 00:27:27.404 common/ionic: not in enabled drivers build config 00:27:27.404 common/mvep: not in enabled drivers build config 00:27:27.404 common/octeontx: not in enabled drivers build config 00:27:27.404 bus/auxiliary: not in enabled drivers build config 00:27:27.404 bus/cdx: not in enabled drivers build config 00:27:27.404 bus/dpaa: not in enabled drivers build config 00:27:27.404 bus/fslmc: not in enabled drivers build config 00:27:27.404 bus/ifpga: not in enabled drivers build config 00:27:27.404 bus/platform: not in enabled drivers build config 00:27:27.404 bus/uacce: not in enabled drivers build config 00:27:27.404 bus/vmbus: not in enabled drivers build config 00:27:27.404 common/cnxk: not in enabled drivers build 
config 00:27:27.404 common/mlx5: not in enabled drivers build config 00:27:27.404 common/nfp: not in enabled drivers build config 00:27:27.404 common/nitrox: not in enabled drivers build config 00:27:27.404 common/qat: not in enabled drivers build config 00:27:27.404 common/sfc_efx: not in enabled drivers build config 00:27:27.404 mempool/bucket: not in enabled drivers build config 00:27:27.404 mempool/cnxk: not in enabled drivers build config 00:27:27.404 mempool/dpaa: not in enabled drivers build config 00:27:27.404 mempool/dpaa2: not in enabled drivers build config 00:27:27.404 mempool/octeontx: not in enabled drivers build config 00:27:27.404 mempool/stack: not in enabled drivers build config 00:27:27.404 dma/cnxk: not in enabled drivers build config 00:27:27.404 dma/dpaa: not in enabled drivers build config 00:27:27.404 dma/dpaa2: not in enabled drivers build config 00:27:27.404 dma/hisilicon: not in enabled drivers build config 00:27:27.404 dma/idxd: not in enabled drivers build config 00:27:27.404 dma/ioat: not in enabled drivers build config 00:27:27.404 dma/skeleton: not in enabled drivers build config 00:27:27.404 net/af_packet: not in enabled drivers build config 00:27:27.404 net/af_xdp: not in enabled drivers build config 00:27:27.404 net/ark: not in enabled drivers build config 00:27:27.404 net/atlantic: not in enabled drivers build config 00:27:27.404 net/avp: not in enabled drivers build config 00:27:27.404 net/axgbe: not in enabled drivers build config 00:27:27.404 net/bnx2x: not in enabled drivers build config 00:27:27.404 net/bnxt: not in enabled drivers build config 00:27:27.404 net/bonding: not in enabled drivers build config 00:27:27.404 net/cnxk: not in enabled drivers build config 00:27:27.404 net/cpfl: not in enabled drivers build config 00:27:27.404 net/cxgbe: not in enabled drivers build config 00:27:27.404 net/dpaa: not in enabled drivers build config 00:27:27.404 net/dpaa2: not in enabled drivers build config 00:27:27.404 net/e1000: not 
in enabled drivers build config 00:27:27.404 net/ena: not in enabled drivers build config 00:27:27.404 net/enetc: not in enabled drivers build config 00:27:27.404 net/enetfec: not in enabled drivers build config 00:27:27.404 net/enic: not in enabled drivers build config 00:27:27.404 net/failsafe: not in enabled drivers build config 00:27:27.404 net/fm10k: not in enabled drivers build config 00:27:27.404 net/gve: not in enabled drivers build config 00:27:27.404 net/hinic: not in enabled drivers build config 00:27:27.404 net/hns3: not in enabled drivers build config 00:27:27.404 net/i40e: not in enabled drivers build config 00:27:27.404 net/iavf: not in enabled drivers build config 00:27:27.404 net/ice: not in enabled drivers build config 00:27:27.404 net/idpf: not in enabled drivers build config 00:27:27.404 net/igc: not in enabled drivers build config 00:27:27.404 net/ionic: not in enabled drivers build config 00:27:27.404 net/ipn3ke: not in enabled drivers build config 00:27:27.404 net/ixgbe: not in enabled drivers build config 00:27:27.404 net/mana: not in enabled drivers build config 00:27:27.404 net/memif: not in enabled drivers build config 00:27:27.404 net/mlx4: not in enabled drivers build config 00:27:27.404 net/mlx5: not in enabled drivers build config 00:27:27.404 net/mvneta: not in enabled drivers build config 00:27:27.404 net/mvpp2: not in enabled drivers build config 00:27:27.404 net/netvsc: not in enabled drivers build config 00:27:27.404 net/nfb: not in enabled drivers build config 00:27:27.404 net/nfp: not in enabled drivers build config 00:27:27.404 net/ngbe: not in enabled drivers build config 00:27:27.404 net/null: not in enabled drivers build config 00:27:27.404 net/octeontx: not in enabled drivers build config 00:27:27.404 net/octeon_ep: not in enabled drivers build config 00:27:27.404 net/pcap: not in enabled drivers build config 00:27:27.404 net/pfe: not in enabled drivers build config 00:27:27.404 net/qede: not in enabled drivers build 
config 00:27:27.404 net/ring: not in enabled drivers build config 00:27:27.404 net/sfc: not in enabled drivers build config 00:27:27.404 net/softnic: not in enabled drivers build config 00:27:27.404 net/tap: not in enabled drivers build config 00:27:27.404 net/thunderx: not in enabled drivers build config 00:27:27.404 net/txgbe: not in enabled drivers build config 00:27:27.404 net/vdev_netvsc: not in enabled drivers build config 00:27:27.404 net/vhost: not in enabled drivers build config 00:27:27.404 net/virtio: not in enabled drivers build config 00:27:27.404 net/vmxnet3: not in enabled drivers build config 00:27:27.404 raw/*: missing internal dependency, "rawdev" 00:27:27.404 crypto/armv8: not in enabled drivers build config 00:27:27.404 crypto/bcmfs: not in enabled drivers build config 00:27:27.404 crypto/caam_jr: not in enabled drivers build config 00:27:27.404 crypto/ccp: not in enabled drivers build config 00:27:27.404 crypto/cnxk: not in enabled drivers build config 00:27:27.404 crypto/dpaa_sec: not in enabled drivers build config 00:27:27.404 crypto/dpaa2_sec: not in enabled drivers build config 00:27:27.404 crypto/ipsec_mb: not in enabled drivers build config 00:27:27.404 crypto/mlx5: not in enabled drivers build config 00:27:27.404 crypto/mvsam: not in enabled drivers build config 00:27:27.404 crypto/nitrox: not in enabled drivers build config 00:27:27.404 crypto/null: not in enabled drivers build config 00:27:27.404 crypto/octeontx: not in enabled drivers build config 00:27:27.404 crypto/openssl: not in enabled drivers build config 00:27:27.404 crypto/scheduler: not in enabled drivers build config 00:27:27.404 crypto/uadk: not in enabled drivers build config 00:27:27.404 crypto/virtio: not in enabled drivers build config 00:27:27.404 compress/isal: not in enabled drivers build config 00:27:27.404 compress/mlx5: not in enabled drivers build config 00:27:27.404 compress/nitrox: not in enabled drivers build config 00:27:27.404 compress/octeontx: not in 
enabled drivers build config 00:27:27.404 compress/zlib: not in enabled drivers build config 00:27:27.404 regex/*: missing internal dependency, "regexdev" 00:27:27.404 ml/*: missing internal dependency, "mldev" 00:27:27.404 vdpa/ifc: not in enabled drivers build config 00:27:27.404 vdpa/mlx5: not in enabled drivers build config 00:27:27.404 vdpa/nfp: not in enabled drivers build config 00:27:27.404 vdpa/sfc: not in enabled drivers build config 00:27:27.404 event/*: missing internal dependency, "eventdev" 00:27:27.404 baseband/*: missing internal dependency, "bbdev" 00:27:27.404 gpu/*: missing internal dependency, "gpudev" 00:27:27.404 00:27:27.404 00:27:27.404 Build targets in project: 61 00:27:27.404 00:27:27.404 DPDK 24.03.0 00:27:27.404 00:27:27.404 User defined options 00:27:27.404 default_library : static 00:27:27.404 libdir : lib 00:27:27.404 prefix : /mnt/sdadir/spdk/dpdk/build 00:27:27.404 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Wno-error 00:27:27.404 c_link_args : 00:27:27.404 cpu_instruction_set: native 00:27:27.404 disable_apps : test-dma-perf,proc-info,dumpcap,test-compress-perf,test-flow-perf,test,test-gpudev,test-fib,test-crypto-perf,test-acl,graph,test-mldev,test-regex,test-eventdev,test-bbdev,test-pipeline,test-cmdline,test-pmd,test-security-perf,pdump,test-sad 00:27:27.405 disable_libs : node,acl,lpm,regexdev,dispatcher,pcapng,gpudev,ipsec,ip_frag,cfgfile,member,graph,sched,bbdev,gso,fib,bitratestats,latencystats,distributor,efd,jobstats,pipeline,argparse,port,gro,metrics,stack,rawdev,bpf,rib,mldev,table,eventdev,pdump,pdcp 00:27:27.405 enable_docs : false 00:27:27.405 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:27:27.405 enable_kmods : false 00:27:27.405 max_lcores : 128 00:27:27.405 tests : false 00:27:27.405 00:27:27.405 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:27:27.405 ninja: Entering directory `/mnt/sdadir/spdk/dpdk/build-tmp' 00:27:27.405 [1/244] 
Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:27:27.405 [2/244] Compiling C object lib/librte_log.a.p/log_log.c.o 00:27:27.405 [3/244] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:27:27.405 [4/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:27:27.405 [5/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:27:27.405 [6/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:27:27.405 [7/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:27:27.405 [8/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:27:27.405 [9/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:27:27.405 [10/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:27:27.405 [11/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:27:27.405 [12/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:27:27.405 [13/244] Linking static target lib/librte_log.a 00:27:27.405 [14/244] Linking target lib/librte_log.so.24.1 00:27:27.405 [15/244] Linking static target lib/librte_kvargs.a 00:27:27.405 [16/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:27:27.405 [17/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:27:27.405 [18/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:27:27.405 [19/244] Linking static target lib/librte_telemetry.a 00:27:27.405 [20/244] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:27:27.405 [21/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:27:27.405 [22/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:27:27.405 [23/244] Linking target lib/librte_kvargs.so.24.1 00:27:27.405 [24/244] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:27:27.405 [25/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:27:27.405 [26/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:27:27.405 [27/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:27:27.405 [28/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:27:27.405 [29/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:27:27.405 [30/244] Linking target lib/librte_telemetry.so.24.1 00:27:27.405 [31/244] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:27:27.405 [32/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:27:27.405 [33/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:27:27.405 [34/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:27:27.405 [35/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:27:27.405 [36/244] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:27:27.405 [37/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:27:27.405 [38/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:27:27.405 [39/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:27:27.405 [40/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:27:27.405 [41/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:27:27.405 [42/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:27:27.405 [43/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:27:27.405 [44/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:27:27.405 [45/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:27:27.405 
[46/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:27:27.405 [47/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:27:27.663 [48/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:27:27.663 [49/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:27:27.663 [50/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:27:27.922 [51/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:27:27.922 [52/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:27:27.922 [53/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:27:27.922 [54/244] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:27:27.922 [55/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:27:28.179 [56/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:27:28.179 [57/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:27:28.179 [58/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:27:28.179 [59/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:27:28.179 [60/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:27:28.179 [61/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:27:28.180 [62/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:27:28.180 [63/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:27:28.437 [64/244] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:27:28.437 [65/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:27:28.437 [66/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:27:28.694 [67/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:27:28.694 [68/244] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:27:28.952 [69/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:27:28.952 [70/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:27:28.952 [71/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:27:29.210 [72/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:27:29.210 [73/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:27:29.210 [74/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:27:29.210 [75/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:27:29.210 [76/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:27:29.210 [77/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:27:29.210 [78/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:27:29.210 [79/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:27:29.210 [80/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:27:29.772 [81/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:27:29.772 [82/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:27:29.772 [83/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:27:29.772 [84/244] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:27:29.772 [85/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:27:30.028 [86/244] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:27:30.028 [87/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:27:30.593 [88/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:27:30.593 [89/244] Linking static target lib/librte_ring.a 00:27:30.593 [90/244] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:27:30.593 [91/244] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:27:30.593 [92/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:27:30.593 [93/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:27:30.593 [94/244] Linking static target lib/librte_eal.a 00:27:30.593 [95/244] Linking target lib/librte_eal.so.24.1 00:27:30.593 [96/244] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:27:30.593 [97/244] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:27:30.593 [98/244] Linking static target lib/net/libnet_crc_avx512_lib.a 00:27:30.593 [99/244] Linking static target lib/librte_rcu.a 00:27:30.593 [100/244] Linking static target lib/librte_mempool.a 00:27:30.851 [101/244] Linking static target lib/librte_mbuf.a 00:27:30.851 [102/244] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:27:30.851 [103/244] Linking target lib/librte_ring.so.24.1 00:27:30.851 [104/244] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:27:31.120 [105/244] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:27:31.120 [106/244] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:27:31.120 [107/244] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:27:31.391 [108/244] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:27:31.391 [109/244] Linking static target lib/librte_meter.a 00:27:31.391 [110/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:27:31.391 [111/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:27:31.391 [112/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:27:31.391 [113/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:27:31.649 [114/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:27:31.649 [115/244] Linking target lib/librte_meter.so.24.1 00:27:31.649 [116/244] Linking static target lib/librte_net.a 00:27:31.649 
[117/244] Linking target lib/librte_rcu.so.24.1 00:27:31.649 [118/244] Linking target lib/librte_mempool.so.24.1 00:27:31.649 [119/244] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:27:31.649 [120/244] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:27:31.649 [121/244] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:27:31.908 [122/244] Linking target lib/librte_mbuf.so.24.1 00:27:31.908 [123/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:27:31.908 [124/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:27:31.908 [125/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:27:32.166 [126/244] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:27:32.166 [127/244] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:27:32.166 [128/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:27:32.166 [129/244] Linking target lib/librte_net.so.24.1 00:27:32.166 [130/244] Linking static target lib/librte_pci.a 00:27:32.166 [131/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:27:32.166 [132/244] Linking target lib/librte_pci.so.24.1 00:27:32.425 [133/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:27:32.425 [134/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:27:32.425 [135/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:27:32.425 [136/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:27:32.425 [137/244] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:27:32.425 [138/244] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:27:32.425 [139/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:27:32.425 [140/244] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:27:32.425 [141/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:27:32.683 [142/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:27:32.683 [143/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:27:32.683 [144/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:27:32.683 [145/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:27:32.683 [146/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:27:32.683 [147/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:27:32.683 [148/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:27:32.683 [149/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:27:32.683 [150/244] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:27:32.683 [151/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:27:32.683 [152/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:27:32.941 [153/244] Linking static target lib/librte_cmdline.a 00:27:32.941 [154/244] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:27:32.941 [155/244] Linking target lib/librte_cmdline.so.24.1 00:27:32.941 [156/244] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:27:33.199 [157/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:27:33.199 [158/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:27:33.199 [159/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:27:33.199 [160/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:27:33.457 [161/244] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:27:33.457 [162/244] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:27:33.457 [163/244] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:27:33.457 [164/244] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:27:33.457 [165/244] Linking static target lib/librte_compressdev.a 00:27:33.457 [166/244] Linking static target lib/librte_timer.a 00:27:33.457 [167/244] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:27:33.457 [168/244] Linking target lib/librte_compressdev.so.24.1 00:27:33.715 [169/244] Linking target lib/librte_timer.so.24.1 00:27:33.715 [170/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:27:33.715 [171/244] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:27:33.715 [172/244] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:27:33.715 [173/244] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:27:33.974 [174/244] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:27:33.974 [175/244] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:27:33.974 [176/244] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:27:33.974 [177/244] Linking static target lib/librte_dmadev.a 00:27:34.232 [178/244] Linking target lib/librte_dmadev.so.24.1 00:27:34.232 [179/244] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:27:34.232 [180/244] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:27:34.232 [181/244] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:27:34.490 [182/244] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:27:34.490 [183/244] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:27:34.490 [184/244] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:27:34.490 [185/244] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:27:34.490 [186/244] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:27:34.490 [187/244] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:27:34.490 [188/244] Linking static target lib/librte_reorder.a 00:27:34.490 [189/244] Linking static target lib/librte_power.a 00:27:34.748 [190/244] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:27:34.748 [191/244] Linking target lib/librte_reorder.so.24.1 00:27:34.748 [192/244] Linking static target lib/librte_security.a 00:27:35.006 [193/244] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:27:35.006 [194/244] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:27:35.006 [195/244] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:27:35.006 [196/244] Linking static target lib/librte_hash.a 00:27:35.006 [197/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:27:35.265 [198/244] Linking target lib/librte_hash.so.24.1 00:27:35.265 [199/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:27:35.265 [200/244] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:27:35.265 [201/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:27:35.265 [202/244] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:27:35.523 [203/244] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:27:35.523 [204/244] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:27:35.523 [205/244] Linking target lib/librte_ethdev.so.24.1 00:27:35.523 [206/244] Linking static target lib/librte_cryptodev.a 00:27:35.523 [207/244] Linking target lib/librte_cryptodev.so.24.1 00:27:35.523 [208/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:27:35.782 [209/244] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:27:35.782 
[210/244] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:27:35.782 [211/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:27:35.782 [212/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:27:35.782 [213/244] Linking target lib/librte_security.so.24.1 00:27:35.782 [214/244] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:27:35.782 [215/244] Linking static target lib/librte_ethdev.a 00:27:35.782 [216/244] Linking target lib/librte_power.so.24.1 00:27:36.041 [217/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:27:36.041 [218/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:27:36.041 [219/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:27:36.301 [220/244] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:27:36.301 [221/244] Linking static target drivers/libtmp_rte_bus_vdev.a 00:27:36.301 [222/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:27:36.301 [223/244] Linking static target drivers/libtmp_rte_bus_pci.a 00:27:36.563 [224/244] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:27:36.563 [225/244] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:27:36.563 [226/244] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:27:36.563 [227/244] Linking static target drivers/librte_bus_vdev.a 00:27:36.563 [228/244] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:27:36.563 [229/244] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:27:36.563 [230/244] Linking target drivers/librte_bus_vdev.so.24.1 00:27:36.563 [231/244] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:27:36.563 [232/244] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:27:36.823 [233/244] Linking static target drivers/libtmp_rte_mempool_ring.a 00:27:36.823 [234/244] Linking static target drivers/librte_bus_pci.a 00:27:36.823 [235/244] Linking target drivers/librte_bus_pci.so.24.1 00:27:36.823 [236/244] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:27:37.081 [237/244] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:27:37.081 [238/244] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:27:37.081 [239/244] Linking static target drivers/librte_mempool_ring.a 00:27:37.081 [240/244] Linking target drivers/librte_mempool_ring.so.24.1 00:27:38.987 [241/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:27:45.602 [242/244] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:27:45.602 [243/244] Linking target lib/librte_vhost.so.24.1 00:27:45.602 [244/244] Linking static target lib/librte_vhost.a 00:27:45.602 INFO: autodetecting backend as ninja 00:27:45.602 INFO: calculating backend command to run: /usr/local/bin/ninja -C /mnt/sdadir/spdk/dpdk/build-tmp 00:27:50.927 CC lib/ut_mock/mock.o 00:27:50.927 CC lib/log/log.o 00:27:50.927 CC lib/log/log_flags.o 00:27:50.927 CC lib/log/log_deprecated.o 00:27:51.190 LIB libspdk_ut_mock.a 00:27:51.190 LIB libspdk_log.a 00:27:51.759 CC lib/util/base64.o 00:27:51.759 CC lib/dma/dma.o 00:27:51.759 CC lib/util/cpuset.o 00:27:51.759 CC lib/util/crc32.o 00:27:51.759 CC lib/util/bit_array.o 00:27:51.759 CC lib/util/crc16.o 00:27:51.759 CC lib/util/crc32c.o 00:27:51.759 CC lib/util/crc32_ieee.o 00:27:51.759 CXX lib/trace_parser/trace.o 00:27:51.759 CC lib/util/crc64.o 00:27:51.759 CC lib/util/dif.o 00:27:51.759 CC lib/util/fd.o 00:27:51.759 CC lib/util/fd_group.o 00:27:51.759 CC lib/util/file.o 00:27:51.759 CC lib/util/hexlify.o 00:27:51.759 CC lib/ioat/ioat.o 00:27:51.759 CC 
lib/util/iov.o 00:27:51.759 CC lib/util/net.o 00:27:51.759 CC lib/util/pipe.o 00:27:51.759 CC lib/util/math.o 00:27:51.759 CC lib/util/strerror_tls.o 00:27:51.759 CC lib/util/string.o 00:27:51.759 CC lib/util/uuid.o 00:27:51.759 CC lib/util/xor.o 00:27:51.759 CC lib/util/zipf.o 00:27:52.017 CC lib/vfio_user/host/vfio_user_pci.o 00:27:52.017 CC lib/vfio_user/host/vfio_user.o 00:27:52.276 LIB libspdk_dma.a 00:27:52.535 LIB libspdk_vfio_user.a 00:27:52.535 LIB libspdk_ioat.a 00:27:52.794 LIB libspdk_trace_parser.a 00:27:53.052 LIB libspdk_util.a 00:27:53.989 CC lib/vmd/vmd.o 00:27:53.989 CC lib/vmd/led.o 00:27:53.989 CC lib/conf/conf.o 00:27:53.989 CC lib/json/json_parse.o 00:27:53.989 CC lib/json/json_util.o 00:27:53.989 CC lib/json/json_write.o 00:27:53.989 CC lib/env_dpdk/env.o 00:27:53.989 CC lib/env_dpdk/memory.o 00:27:53.989 CC lib/env_dpdk/pci.o 00:27:53.989 CC lib/env_dpdk/init.o 00:27:53.989 CC lib/env_dpdk/threads.o 00:27:53.989 CC lib/env_dpdk/pci_ioat.o 00:27:53.989 CC lib/env_dpdk/pci_virtio.o 00:27:53.989 CC lib/env_dpdk/pci_vmd.o 00:27:53.989 CC lib/env_dpdk/pci_idxd.o 00:27:53.989 CC lib/env_dpdk/pci_event.o 00:27:53.989 CC lib/env_dpdk/sigbus_handler.o 00:27:53.989 CC lib/env_dpdk/pci_dpdk.o 00:27:53.989 CC lib/env_dpdk/pci_dpdk_2207.o 00:27:53.989 CC lib/env_dpdk/pci_dpdk_2211.o 00:27:54.556 LIB libspdk_conf.a 00:27:54.556 LIB libspdk_vmd.a 00:27:54.556 LIB libspdk_json.a 00:27:55.492 CC lib/jsonrpc/jsonrpc_server.o 00:27:55.492 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:27:55.492 CC lib/jsonrpc/jsonrpc_client.o 00:27:55.492 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:27:55.492 LIB libspdk_env_dpdk.a 00:27:55.769 LIB libspdk_jsonrpc.a 00:27:56.337 CC lib/rpc/rpc.o 00:27:56.597 LIB libspdk_rpc.a 00:27:57.166 CC lib/trace/trace.o 00:27:57.166 CC lib/trace/trace_flags.o 00:27:57.166 CC lib/trace/trace_rpc.o 00:27:57.166 CC lib/keyring/keyring.o 00:27:57.166 CC lib/keyring/keyring_rpc.o 00:27:57.166 CC lib/notify/notify.o 00:27:57.166 CC lib/notify/notify_rpc.o 
00:27:57.425 LIB libspdk_notify.a 00:27:57.683 LIB libspdk_keyring.a 00:27:57.683 LIB libspdk_trace.a 00:27:58.249 CC lib/sock/sock.o 00:27:58.249 CC lib/sock/sock_rpc.o 00:27:58.249 CC lib/thread/thread.o 00:27:58.249 CC lib/thread/iobuf.o 00:27:58.815 LIB libspdk_sock.a 00:27:59.383 CC lib/nvme/nvme_ctrlr_cmd.o 00:27:59.383 CC lib/nvme/nvme_ctrlr.o 00:27:59.383 CC lib/nvme/nvme_fabric.o 00:27:59.383 CC lib/nvme/nvme_ns_cmd.o 00:27:59.383 CC lib/nvme/nvme_ns.o 00:27:59.383 CC lib/nvme/nvme_pcie_common.o 00:27:59.383 CC lib/nvme/nvme_pcie.o 00:27:59.383 CC lib/nvme/nvme_qpair.o 00:27:59.383 CC lib/nvme/nvme.o 00:27:59.383 CC lib/nvme/nvme_quirks.o 00:27:59.383 CC lib/nvme/nvme_transport.o 00:27:59.383 CC lib/nvme/nvme_discovery.o 00:27:59.383 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:27:59.383 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:27:59.383 CC lib/nvme/nvme_tcp.o 00:27:59.383 CC lib/nvme/nvme_opal.o 00:27:59.383 CC lib/nvme/nvme_io_msg.o 00:27:59.383 CC lib/nvme/nvme_zns.o 00:27:59.383 CC lib/nvme/nvme_poll_group.o 00:27:59.383 CC lib/nvme/nvme_auth.o 00:27:59.383 CC lib/nvme/nvme_stubs.o 00:27:59.383 CC lib/nvme/nvme_cuse.o 00:27:59.645 LIB libspdk_thread.a 00:28:00.583 CC lib/accel/accel.o 00:28:00.583 CC lib/accel/accel_rpc.o 00:28:00.583 CC lib/accel/accel_sw.o 00:28:00.583 CC lib/blob/blobstore.o 00:28:00.583 CC lib/blob/request.o 00:28:00.583 CC lib/blob/zeroes.o 00:28:00.583 CC lib/blob/blob_bs_dev.o 00:28:00.583 CC lib/init/json_config.o 00:28:00.583 CC lib/init/subsystem.o 00:28:00.583 CC lib/virtio/virtio.o 00:28:00.583 CC lib/virtio/virtio_vhost_user.o 00:28:00.583 CC lib/init/subsystem_rpc.o 00:28:00.583 CC lib/init/rpc.o 00:28:00.583 CC lib/virtio/virtio_vfio_user.o 00:28:00.583 CC lib/virtio/virtio_pci.o 00:28:01.520 LIB libspdk_init.a 00:28:01.779 LIB libspdk_virtio.a 00:28:02.348 CC lib/event/app.o 00:28:02.348 CC lib/event/log_rpc.o 00:28:02.348 CC lib/event/reactor.o 00:28:02.348 CC lib/event/app_rpc.o 00:28:02.348 CC lib/event/scheduler_static.o 
00:28:02.348 LIB libspdk_accel.a 00:28:02.915 LIB libspdk_event.a 00:28:03.174 LIB libspdk_nvme.a 00:28:03.434 CC lib/bdev/bdev.o 00:28:03.434 CC lib/bdev/bdev_zone.o 00:28:03.434 CC lib/bdev/bdev_rpc.o 00:28:03.434 CC lib/bdev/part.o 00:28:03.434 CC lib/bdev/scsi_nvme.o 00:28:04.372 LIB libspdk_blob.a 00:28:05.774 CC lib/lvol/lvol.o 00:28:05.774 CC lib/blobfs/tree.o 00:28:05.774 CC lib/blobfs/blobfs.o 00:28:06.345 LIB libspdk_bdev.a 00:28:06.604 LIB libspdk_blobfs.a 00:28:06.604 LIB libspdk_lvol.a 00:28:07.985 CC lib/scsi/lun.o 00:28:07.985 CC lib/scsi/port.o 00:28:07.985 CC lib/scsi/scsi.o 00:28:07.985 CC lib/scsi/dev.o 00:28:07.985 CC lib/scsi/scsi_bdev.o 00:28:07.985 CC lib/scsi/scsi_rpc.o 00:28:07.985 CC lib/scsi/scsi_pr.o 00:28:07.985 CC lib/ftl/ftl_core.o 00:28:07.985 CC lib/scsi/task.o 00:28:07.985 CC lib/ftl/ftl_init.o 00:28:07.985 CC lib/ftl/ftl_layout.o 00:28:07.985 CC lib/ftl/ftl_debug.o 00:28:07.985 CC lib/ftl/ftl_io.o 00:28:07.985 CC lib/nbd/nbd.o 00:28:07.985 CC lib/ftl/ftl_sb.o 00:28:07.985 CC lib/nbd/nbd_rpc.o 00:28:07.985 CC lib/nvmf/ctrlr.o 00:28:07.985 CC lib/ftl/ftl_l2p.o 00:28:07.985 CC lib/nvmf/ctrlr_discovery.o 00:28:07.985 CC lib/nvmf/ctrlr_bdev.o 00:28:07.985 CC lib/ftl/ftl_nv_cache.o 00:28:07.985 CC lib/ftl/ftl_l2p_flat.o 00:28:07.985 CC lib/ftl/ftl_band.o 00:28:07.985 CC lib/nvmf/subsystem.o 00:28:07.985 CC lib/ftl/ftl_band_ops.o 00:28:07.985 CC lib/ftl/ftl_writer.o 00:28:07.985 CC lib/nvmf/nvmf.o 00:28:07.985 CC lib/ftl/ftl_rq.o 00:28:07.985 CC lib/nvmf/nvmf_rpc.o 00:28:07.985 CC lib/ftl/ftl_reloc.o 00:28:07.985 CC lib/nvmf/transport.o 00:28:07.985 CC lib/ftl/ftl_l2p_cache.o 00:28:07.985 CC lib/nvmf/tcp.o 00:28:07.985 CC lib/ftl/ftl_p2l.o 00:28:07.985 CC lib/nvmf/stubs.o 00:28:07.985 CC lib/ftl/mngt/ftl_mngt.o 00:28:07.985 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:28:07.985 CC lib/nvmf/mdns_server.o 00:28:07.985 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:28:07.985 CC lib/ftl/mngt/ftl_mngt_startup.o 00:28:07.985 CC lib/nvmf/auth.o 00:28:07.985 CC 
lib/ftl/mngt/ftl_mngt_md.o 00:28:07.985 CC lib/ftl/mngt/ftl_mngt_misc.o 00:28:07.985 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:28:07.985 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:28:07.985 CC lib/ftl/mngt/ftl_mngt_band.o 00:28:07.985 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:28:07.985 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:28:07.985 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:28:07.985 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:28:07.985 CC lib/ftl/utils/ftl_conf.o 00:28:07.985 CC lib/ftl/utils/ftl_md.o 00:28:07.985 CC lib/ftl/utils/ftl_mempool.o 00:28:07.985 CC lib/ftl/utils/ftl_bitmap.o 00:28:07.985 CC lib/ftl/utils/ftl_property.o 00:28:07.985 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:28:07.985 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:28:07.985 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:28:07.985 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:28:07.985 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:28:07.985 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:28:07.985 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:28:07.985 CC lib/ftl/upgrade/ftl_sb_v3.o 00:28:07.985 CC lib/ftl/upgrade/ftl_sb_v5.o 00:28:07.985 CC lib/ftl/nvc/ftl_nvc_dev.o 00:28:07.985 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:28:07.985 CC lib/ftl/base/ftl_base_dev.o 00:28:08.244 CC lib/ftl/base/ftl_base_bdev.o 00:28:10.151 LIB libspdk_scsi.a 00:28:10.151 LIB libspdk_nbd.a 00:28:10.151 LIB libspdk_ftl.a 00:28:10.715 CC lib/vhost/vhost_rpc.o 00:28:10.715 CC lib/vhost/vhost.o 00:28:10.715 CC lib/vhost/vhost_scsi.o 00:28:10.715 CC lib/vhost/vhost_blk.o 00:28:10.715 CC lib/vhost/rte_vhost_user.o 00:28:10.715 CC lib/iscsi/conn.o 00:28:10.715 CC lib/iscsi/iscsi.o 00:28:10.715 CC lib/iscsi/init_grp.o 00:28:10.715 CC lib/iscsi/md5.o 00:28:10.715 CC lib/iscsi/param.o 00:28:10.715 CC lib/iscsi/portal_grp.o 00:28:10.715 CC lib/iscsi/tgt_node.o 00:28:10.715 CC lib/iscsi/iscsi_subsystem.o 00:28:10.715 CC lib/iscsi/iscsi_rpc.o 00:28:10.715 CC lib/iscsi/task.o 00:28:11.293 LIB libspdk_nvmf.a 00:28:12.232 LIB libspdk_vhost.a 00:28:12.800 LIB libspdk_iscsi.a 00:28:16.998 CC 
module/env_dpdk/env_dpdk_rpc.o 00:28:16.998 CC module/scheduler/gscheduler/gscheduler.o 00:28:16.998 CC module/accel/error/accel_error.o 00:28:16.998 CC module/blob/bdev/blob_bdev.o 00:28:16.998 CC module/accel/error/accel_error_rpc.o 00:28:16.998 CC module/accel/ioat/accel_ioat.o 00:28:16.998 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:28:16.998 CC module/accel/ioat/accel_ioat_rpc.o 00:28:16.998 CC module/scheduler/dynamic/scheduler_dynamic.o 00:28:16.998 CC module/sock/posix/posix.o 00:28:16.998 CC module/keyring/file/keyring.o 00:28:16.998 CC module/keyring/file/keyring_rpc.o 00:28:16.998 CC module/keyring/linux/keyring.o 00:28:16.998 CC module/keyring/linux/keyring_rpc.o 00:28:16.998 LIB libspdk_env_dpdk_rpc.a 00:28:16.998 LIB libspdk_scheduler_gscheduler.a 00:28:16.998 LIB libspdk_scheduler_dpdk_governor.a 00:28:16.998 LIB libspdk_keyring_linux.a 00:28:16.998 LIB libspdk_keyring_file.a 00:28:16.998 LIB libspdk_accel_error.a 00:28:16.998 LIB libspdk_scheduler_dynamic.a 00:28:16.998 LIB libspdk_blob_bdev.a 00:28:16.998 LIB libspdk_accel_ioat.a 00:28:17.578 LIB libspdk_sock_posix.a 00:28:17.836 CC module/bdev/split/vbdev_split_rpc.o 00:28:17.836 CC module/bdev/split/vbdev_split.o 00:28:17.836 CC module/bdev/error/vbdev_error.o 00:28:17.836 CC module/bdev/error/vbdev_error_rpc.o 00:28:17.836 CC module/bdev/null/bdev_null.o 00:28:17.836 CC module/bdev/null/bdev_null_rpc.o 00:28:17.836 CC module/bdev/lvol/vbdev_lvol.o 00:28:17.836 CC module/blobfs/bdev/blobfs_bdev.o 00:28:17.836 CC module/bdev/zone_block/vbdev_zone_block.o 00:28:17.836 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:28:17.836 CC module/bdev/passthru/vbdev_passthru.o 00:28:17.836 CC module/bdev/delay/vbdev_delay.o 00:28:17.836 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:28:17.837 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:28:17.837 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:28:17.837 CC module/bdev/gpt/gpt.o 00:28:17.837 CC module/bdev/nvme/bdev_nvme.o 00:28:17.837 CC 
module/bdev/gpt/vbdev_gpt.o 00:28:17.837 CC module/bdev/aio/bdev_aio.o 00:28:17.837 CC module/bdev/raid/bdev_raid.o 00:28:17.837 CC module/bdev/nvme/bdev_nvme_rpc.o 00:28:17.837 CC module/bdev/nvme/nvme_rpc.o 00:28:17.837 CC module/bdev/aio/bdev_aio_rpc.o 00:28:17.837 CC module/bdev/nvme/bdev_mdns_client.o 00:28:17.837 CC module/bdev/delay/vbdev_delay_rpc.o 00:28:17.837 CC module/bdev/raid/bdev_raid_rpc.o 00:28:17.837 CC module/bdev/nvme/vbdev_opal.o 00:28:17.837 CC module/bdev/ftl/bdev_ftl.o 00:28:17.837 CC module/bdev/ftl/bdev_ftl_rpc.o 00:28:17.837 CC module/bdev/nvme/vbdev_opal_rpc.o 00:28:17.837 CC module/bdev/malloc/bdev_malloc.o 00:28:17.837 CC module/bdev/raid/bdev_raid_sb.o 00:28:17.837 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:28:17.837 CC module/bdev/malloc/bdev_malloc_rpc.o 00:28:17.837 CC module/bdev/raid/raid0.o 00:28:17.837 CC module/bdev/raid/raid1.o 00:28:17.837 CC module/bdev/virtio/bdev_virtio_scsi.o 00:28:17.837 CC module/bdev/raid/concat.o 00:28:18.095 CC module/bdev/virtio/bdev_virtio_blk.o 00:28:18.095 CC module/bdev/virtio/bdev_virtio_rpc.o 00:28:19.033 LIB libspdk_blobfs_bdev.a 00:28:19.033 LIB libspdk_bdev_split.a 00:28:19.033 LIB libspdk_bdev_malloc.a 00:28:19.291 LIB libspdk_bdev_aio.a 00:28:19.291 LIB libspdk_bdev_delay.a 00:28:19.291 LIB libspdk_bdev_null.a 00:28:19.291 LIB libspdk_bdev_gpt.a 00:28:19.291 LIB libspdk_bdev_error.a 00:28:19.291 LIB libspdk_bdev_passthru.a 00:28:19.291 LIB libspdk_bdev_ftl.a 00:28:19.291 LIB libspdk_bdev_zone_block.a 00:28:19.291 LIB libspdk_bdev_virtio.a 00:28:19.550 LIB libspdk_bdev_lvol.a 00:28:19.811 LIB libspdk_bdev_raid.a 00:28:21.191 LIB libspdk_bdev_nvme.a 00:28:23.126 CC module/event/subsystems/sock/sock.o 00:28:23.126 CC module/event/subsystems/vmd/vmd.o 00:28:23.126 CC module/event/subsystems/vmd/vmd_rpc.o 00:28:23.126 CC module/event/subsystems/iobuf/iobuf.o 00:28:23.126 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:28:23.126 CC module/event/subsystems/keyring/keyring.o 00:28:23.126 CC 
module/event/subsystems/scheduler/scheduler.o 00:28:23.126 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:28:23.126 LIB libspdk_event_vhost_blk.a 00:28:23.126 LIB libspdk_event_keyring.a 00:28:23.126 LIB libspdk_event_sock.a 00:28:23.126 LIB libspdk_event_scheduler.a 00:28:23.126 LIB libspdk_event_vmd.a 00:28:23.126 LIB libspdk_event_iobuf.a 00:28:24.059 CC module/event/subsystems/accel/accel.o 00:28:24.059 LIB libspdk_event_accel.a 00:28:24.623 CC module/event/subsystems/bdev/bdev.o 00:28:24.881 LIB libspdk_event_bdev.a 00:28:25.449 CC module/event/subsystems/nbd/nbd.o 00:28:25.449 CC module/event/subsystems/scsi/scsi.o 00:28:25.449 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:28:25.449 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:28:25.449 LIB libspdk_event_nbd.a 00:28:25.709 LIB libspdk_event_scsi.a 00:28:25.709 LIB libspdk_event_nvmf.a 00:28:26.281 CC module/event/subsystems/iscsi/iscsi.o 00:28:26.281 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:28:26.539 LIB libspdk_event_iscsi.a 00:28:26.539 LIB libspdk_event_vhost_scsi.a 00:28:26.798 make[1]: Nothing to be done for 'all'. 
00:28:26.798 CC app/trace_record/trace_record.o 00:28:26.798 CC app/spdk_top/spdk_top.o 00:28:26.798 CXX app/trace/trace.o 00:28:26.798 CC app/spdk_nvme_perf/perf.o 00:28:26.798 CC app/spdk_lspci/spdk_lspci.o 00:28:26.798 CC app/spdk_nvme_discover/discovery_aer.o 00:28:26.798 CC app/spdk_nvme_identify/identify.o 00:28:27.057 CC app/iscsi_tgt/iscsi_tgt.o 00:28:27.057 CC examples/interrupt_tgt/interrupt_tgt.o 00:28:27.057 CC app/spdk_dd/spdk_dd.o 00:28:27.057 CC app/spdk_tgt/spdk_tgt.o 00:28:27.057 CC app/nvmf_tgt/nvmf_main.o 00:28:27.057 CC examples/util/zipf/zipf.o 00:28:27.057 CC examples/ioat/perf/perf.o 00:28:27.057 CC examples/ioat/verify/verify.o 00:28:27.317 LINK nvmf_tgt 00:28:27.317 LINK spdk_lspci 00:28:27.317 LINK iscsi_tgt 00:28:27.317 LINK zipf 00:28:27.578 LINK interrupt_tgt 00:28:27.578 LINK spdk_nvme_discover 00:28:27.578 LINK ioat_perf 00:28:27.578 LINK spdk_trace_record 00:28:27.578 LINK spdk_trace 00:28:27.578 LINK spdk_tgt 00:28:27.578 LINK verify 00:28:27.838 LINK spdk_dd 00:28:28.409 LINK spdk_nvme_perf 00:28:28.985 LINK spdk_nvme_identify 00:28:28.985 LINK spdk_top 00:28:29.564 CC app/vhost/vhost.o 00:28:30.133 LINK vhost 00:28:32.667 CC examples/vmd/lsvmd/lsvmd.o 00:28:32.667 CC examples/sock/hello_world/hello_sock.o 00:28:32.667 CC examples/vmd/led/led.o 00:28:32.667 CC examples/thread/thread/thread_ex.o 00:28:32.926 LINK lsvmd 00:28:32.926 LINK led 00:28:32.926 LINK hello_sock 00:28:33.185 LINK thread 00:28:41.308 CC examples/nvme/abort/abort.o 00:28:41.308 CC examples/nvme/hotplug/hotplug.o 00:28:41.308 CC examples/nvme/hello_world/hello_world.o 00:28:41.308 CC examples/nvme/arbitration/arbitration.o 00:28:41.308 CC examples/nvme/nvme_manage/nvme_manage.o 00:28:41.308 CC examples/nvme/reconnect/reconnect.o 00:28:41.308 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:28:41.308 CC examples/nvme/cmb_copy/cmb_copy.o 00:28:41.566 LINK cmb_copy 00:28:41.566 LINK pmr_persistence 00:28:41.566 LINK hello_world 00:28:41.566 LINK hotplug 
00:28:42.134 LINK reconnect
00:28:42.134 LINK arbitration
00:28:42.134 LINK abort
00:28:42.393 LINK nvme_manage
00:28:46.588 CC examples/accel/perf/accel_perf.o
00:28:46.847 CC examples/blob/hello_world/hello_blob.o
00:28:46.847 CC examples/blob/cli/blobcli.o
00:28:47.415 LINK hello_blob
00:28:47.674 LINK accel_perf
00:28:47.933 LINK blobcli
00:28:56.062 CC examples/bdev/hello_world/hello_bdev.o
00:28:56.062 CC examples/bdev/bdevperf/bdevperf.o
00:28:56.062 LINK hello_bdev
00:28:56.321 LINK bdevperf
00:29:04.447 CC examples/nvmf/nvmf/nvmf.o
00:29:04.706 LINK nvmf
00:29:14.692 make: Leaving directory '/mnt/sdadir/spdk'
00:29:14.692 09:11:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@101 -- # rm -rf /mnt/sdadir/spdk
00:30:01.397 09:12:02 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@102 -- # umount /mnt/sdadir
00:30:01.397 09:12:02 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@103 -- # rm -rf /mnt/sdadir
00:30:01.397 09:12:02 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@105 -- # stats=($(cat "/sys/block/$dev/stat"))
00:30:01.397 09:12:02 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@105 -- # cat /sys/block/sda/stat
00:30:01.397 READ IO cnt: 44 merges: 0 sectors: 1232 ticks: 19
00:30:01.397 WRITE IO cnt: 634864 merges: 624889 sectors: 10858816 ticks: 585124
00:30:01.397 in flight: 0 io ticks: 249145 time in queue: 638921
00:30:01.397 09:12:02 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@107 -- # printf 'READ IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' 44 0 1232 19
00:30:01.397 09:12:02 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@109 -- # printf 'WRITE IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' 634864 624889 10858816 585124
00:30:01.397 09:12:02 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@111 -- # printf 'in flight: % 8u io ticks: % 8u time in queue: % 8u\n' 0 249145 638921
00:30:01.397 09:12:02 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@1 -- # cleanup
00:30:01.397 09:12:02 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_delete Nvme0n1
00:30:01.397 [2024-07-25 09:12:02.864495] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1p0) received event(SPDK_BDEV_EVENT_REMOVE)
00:30:01.397 09:12:02 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@13 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_delete EE_Malloc0
00:30:01.397 09:12:03 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:30:01.397 09:12:03 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@15 -- # killprocess 82256
00:30:01.397 09:12:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@950 -- # '[' -z 82256 ']'
00:30:01.397 09:12:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@954 -- # kill -0 82256
00:30:01.397 09:12:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@955 -- # uname
00:30:01.397 09:12:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:01.397 09:12:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82256
00:30:01.397 killing process with pid 82256
00:30:01.397 09:12:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:30:01.397 09:12:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:30:01.397 09:12:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82256'
00:30:01.397 09:12:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@969 -- # kill 82256
00:30:01.397 09:12:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@974 -- # wait 82256
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@17 -- # mountpoint -q /mnt/sdadir
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@18 -- # rm -rf /mnt/sdadir
00:30:01.397 Cleaning up iSCSI connection
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@20 -- # iscsicleanup
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection'
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout
00:30:01.397 Logging out of session [sid: 72, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260]
00:30:01.397 Logout of [sid: 72, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] successful.
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@985 -- # rm -rf
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@21 -- # iscsitestfini
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']'
00:30:01.397
00:30:01.397 real 6m37.238s
00:30:01.397 user 11m4.118s
00:30:01.397 sys 2m54.470s
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x
00:30:01.397 ************************************
00:30:01.397 END TEST iscsi_tgt_ext4test
00:30:01.397 ************************************
00:30:01.397 09:12:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@49 -- # '[' 1 -eq 1 ']'
00:30:01.397 09:12:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@50 -- # hash ceph
00:30:01.397 09:12:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@54 -- # run_test iscsi_tgt_rbd /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd/rbd.sh
00:30:01.397 09:12:07 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:30:01.397 09:12:07 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:01.397 09:12:07 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x
00:30:01.397 ************************************
00:30:01.397 START TEST iscsi_tgt_rbd
00:30:01.397 ************************************
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd/rbd.sh
00:30:01.397 * Looking for test storage...
00:30:01.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br
00:30:01.397 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@11 -- # iscsitestinit
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']'
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@13 -- # timing_enter rbd_setup
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@14 -- # rbd_setup 10.0.0.1 spdk_iscsi_ns
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1007 -- # '[' -z 10.0.0.1 ']'
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1011 -- # '[' -n spdk_iscsi_ns ']'
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1012 -- # grep spdk_iscsi_ns
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1012 -- # ip netns list
00:30:01.398 spdk_iscsi_ns (id: 0)
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1013 -- # NS_CMD='ip netns exec spdk_iscsi_ns'
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1020 -- # hash ceph
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1021 -- # export PG_NUM=128
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1021 -- # PG_NUM=128
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1022 -- # export RBD_POOL=rbd
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1022 -- # RBD_POOL=rbd
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1023 -- # export RBD_NAME=foo
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1023 -- # RBD_NAME=foo
00:30:01.398 09:12:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1024 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh
00:30:01.398 + base_dir=/var/tmp/ceph
00:30:01.398 + image=/var/tmp/ceph/ceph_raw.img
00:30:01.398 + dev=/dev/loop200
00:30:01.398 + pkill -9 ceph
00:30:01.398 + sleep 3
00:30:03.937 + umount /dev/loop200p2
00:30:03.937 umount: /dev/loop200p2: no mount point specified.
00:30:03.937 + losetup -d /dev/loop200
00:30:03.937 losetup: /dev/loop200: failed to use device: No such device
00:30:03.937 + rm -rf /var/tmp/ceph
00:30:03.937 09:12:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1025 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 10.0.0.1
00:30:03.937 + set -e
00:30:03.937 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh
00:30:03.937 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph
00:30:03.937 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph
00:30:03.937 + base_dir=/var/tmp/ceph
00:30:03.937 + mon_ip=10.0.0.1
00:30:03.937 + mon_dir=/var/tmp/ceph/mon.a
00:30:03.937 + pid_dir=/var/tmp/ceph/pid
00:30:03.937 + ceph_conf=/var/tmp/ceph/ceph.conf
00:30:03.937 + mnt_dir=/var/tmp/ceph/mnt
00:30:03.937 + image=/var/tmp/ceph_raw.img
00:30:03.937 + dev=/dev/loop200
00:30:03.937 + modprobe loop
00:30:03.937 + umount /dev/loop200p2
00:30:03.937 umount: /dev/loop200p2: no mount point specified.
00:30:03.937 + true
00:30:03.937 + losetup -d /dev/loop200
00:30:03.937 losetup: /dev/loop200: failed to use device: No such device
00:30:03.937 + true
00:30:03.937 + '[' -d /var/tmp/ceph ']'
00:30:03.937 + mkdir /var/tmp/ceph
00:30:03.937 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf
00:30:03.937 + '[' '!' -e /var/tmp/ceph_raw.img ']'
00:30:03.937 + fallocate -l 4G /var/tmp/ceph_raw.img
00:30:03.937 + mknod /dev/loop200 b 7 200
00:30:03.937 + losetup /dev/loop200 /var/tmp/ceph_raw.img
00:30:03.937 + PARTED='parted -s'
00:30:03.937 + SGDISK=sgdisk
00:30:03.937 Partitioning /dev/loop200
00:30:03.937 + echo 'Partitioning /dev/loop200'
00:30:03.937 + parted -s /dev/loop200 mktable gpt
00:30:03.937 + sleep 2
00:30:05.844 + parted -s /dev/loop200 mkpart primary 0% 2GiB
00:30:05.844 + parted -s /dev/loop200 mkpart primary 2GiB 100%
00:30:05.844 Setting name on /dev/loop200
00:30:05.844 + partno=0
00:30:05.844 + echo 'Setting name on /dev/loop200'
00:30:05.844 + sgdisk -c 1:osd-device-0-journal /dev/loop200
00:30:06.777 Warning: The kernel is still using the old partition table.
00:30:06.777 The new table will be used at the next reboot or after you
00:30:06.777 run partprobe(8) or kpartx(8)
00:30:06.777 The operation has completed successfully.
00:30:06.777 + sgdisk -c 2:osd-device-0-data /dev/loop200
00:30:07.735 Warning: The kernel is still using the old partition table.
00:30:07.735 The new table will be used at the next reboot or after you
00:30:07.735 run partprobe(8) or kpartx(8)
00:30:07.735 The operation has completed successfully.
00:30:07.735 + kpartx /dev/loop200
00:30:07.735 loop200p1 : 0 4192256 /dev/loop200 2048
00:30:07.735 loop200p2 : 0 4192256 /dev/loop200 4194304
00:30:07.735 ++ ceph -v
00:30:07.735 ++ awk '{print $3}'
00:30:07.735 + ceph_version=17.2.7
00:30:07.735 + ceph_maj=17
00:30:07.735 + '[' 17 -gt 12 ']'
00:30:07.735 + update_config=true
00:30:07.735 + rm -f /var/log/ceph/ceph-mon.a.log
00:30:07.735 + set_min_mon_release='--set-min-mon-release 14'
00:30:07.735 + ceph_osd_extra_config='--check-needs-journal --no-mon-config'
00:30:07.735 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data
00:30:07.735 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data
00:30:07.735 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data
00:30:07.735 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks
00:30:07.735          =                       sectsz=512   attr=2, projid32bit=1
00:30:07.735          =                       crc=1        finobt=1, sparse=1, rmapbt=0
00:30:07.735          =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
00:30:07.735 data     =                       bsize=4096   blocks=524032, imaxpct=25
00:30:07.735          =                       sunit=0      swidth=0 blks
00:30:07.735 naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
00:30:07.735 log      =internal log           bsize=4096   blocks=16384, version=2
00:30:07.735          =                       sectsz=512   sunit=0 blks, lazy-count=1
00:30:07.735 realtime =none                   extsz=4096   blocks=0, rtextents=0
00:30:07.735 Discarding blocks...Done.
00:30:07.735 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data
00:30:07.994 + cat
00:30:07.994 + rm -rf '/var/tmp/ceph/mon.a/*'
00:30:07.994 + mkdir -p /var/tmp/ceph/mon.a
00:30:07.994 + mkdir -p /var/tmp/ceph/pid
00:30:07.994 + rm -f /etc/ceph/ceph.client.admin.keyring
00:30:07.994 + ceph-authtool --create-keyring --gen-key --name=mon. /var/tmp/ceph/keyring --cap mon 'allow *'
00:30:07.994 creating /var/tmp/ceph/keyring
00:30:07.994 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring
00:30:07.994 + monmaptool --create --clobber --add a 10.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14
00:30:07.994 monmaptool: monmap file /var/tmp/ceph/monmap
00:30:07.994 monmaptool: generated fsid a7da20fc-9a60-49ec-b916-153c48ab6a7e
00:30:07.994 setting min_mon_release = octopus
00:30:07.994 epoch 0
00:30:07.994 fsid a7da20fc-9a60-49ec-b916-153c48ab6a7e
00:30:07.994 last_changed 2024-07-25T09:12:15.003222+0000
00:30:07.994 created 2024-07-25T09:12:15.003222+0000
00:30:07.994 min_mon_release 15 (octopus)
00:30:07.994 election_strategy: 1
00:30:07.994 0: v2:10.0.0.1:12046/0 mon.a
00:30:07.994 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors)
00:30:07.994 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a'
00:30:07.994 + '[' true = true ']'
00:30:07.994 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf
00:30:07.994 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring
00:30:07.994 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf
00:30:07.994 + cp /var/tmp/ceph/keyring /etc/ceph/keyring
00:30:07.994 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring
00:30:07.994 + chmod a+r /etc/ceph/ceph.client.admin.keyring
00:30:07.994 ++ hostname
00:30:07.994 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a'
00:30:08.252 + true
00:30:08.252 + '[' true = true ']'
00:30:08.252 + ceph-conf --name mon.a --show-config-value log_file
00:30:08.252 /var/log/ceph/ceph-mon.a.log
00:30:08.252 ++ ceph -s
00:30:08.252 ++ grep id
00:30:08.252 ++ awk '{print $2}'
00:30:08.511 + fsid=a7da20fc-9a60-49ec-b916-153c48ab6a7e
00:30:08.511 + sed -i 's/perf = true/perf = true\n\tfsid = a7da20fc-9a60-49ec-b916-153c48ab6a7e \n/g' /var/tmp/ceph/ceph.conf
00:30:08.511 + (( ceph_maj < 18 ))
00:30:08.511 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf
00:30:08.511 + cat /var/tmp/ceph/ceph.conf
00:30:08.511 [global]
00:30:08.511 debug_lockdep = 0/0
00:30:08.511 debug_context = 0/0
00:30:08.511 debug_crush = 0/0
00:30:08.511 debug_buffer = 0/0
00:30:08.511 debug_timer = 0/0
00:30:08.511 debug_filer = 0/0
00:30:08.511 debug_objecter = 0/0
00:30:08.511 debug_rados = 0/0
00:30:08.511 debug_rbd = 0/0
00:30:08.511 debug_ms = 0/0
00:30:08.511 debug_monc = 0/0
00:30:08.511 debug_tp = 0/0
00:30:08.511 debug_auth = 0/0
00:30:08.511 debug_finisher = 0/0
00:30:08.511 debug_heartbeatmap = 0/0
00:30:08.511 debug_perfcounter = 0/0
00:30:08.511 debug_asok = 0/0
00:30:08.511 debug_throttle = 0/0
00:30:08.511 debug_mon = 0/0
00:30:08.511 debug_paxos = 0/0
00:30:08.511 debug_rgw = 0/0
00:30:08.511
00:30:08.511 perf = true
00:30:08.511 osd objectstore = filestore
00:30:08.511
00:30:08.511 fsid = a7da20fc-9a60-49ec-b916-153c48ab6a7e
00:30:08.511
00:30:08.511 mutex_perf_counter = false
00:30:08.511 throttler_perf_counter = false
00:30:08.511 rbd cache = false
00:30:08.511 mon_allow_pool_delete = true
00:30:08.511
00:30:08.511 osd_pool_default_size = 1
00:30:08.511
00:30:08.511 [mon]
00:30:08.511 mon_max_pool_pg_num=166496
00:30:08.511 mon_osd_max_split_count = 10000
00:30:08.511 mon_pg_warn_max_per_osd = 10000
00:30:08.511
00:30:08.511 [osd]
00:30:08.511 osd_op_threads = 64
00:30:08.511 filestore_queue_max_ops=5000
00:30:08.511 filestore_queue_committing_max_ops=5000
00:30:08.511 journal_max_write_entries=1000
00:30:08.511 journal_queue_max_ops=3000
00:30:08.511 objecter_inflight_ops=102400
00:30:08.511 filestore_wbthrottle_enable=false
00:30:08.511 filestore_queue_max_bytes=1048576000
00:30:08.511 filestore_queue_committing_max_bytes=1048576000
00:30:08.511 journal_max_write_bytes=1048576000
00:30:08.511 journal_queue_max_bytes=1048576000
00:30:08.511 ms_dispatch_throttle_bytes=1048576000
00:30:08.511 objecter_inflight_op_bytes=1048576000
00:30:08.511 filestore_max_sync_interval=10
00:30:08.511 osd_client_message_size_cap = 0
00:30:08.511 osd_client_message_cap = 0
00:30:08.511 osd_enable_op_tracker = false
00:30:08.511 filestore_fd_cache_size = 10240
00:30:08.511 filestore_fd_cache_shards = 64
00:30:08.511 filestore_op_threads = 16
00:30:08.511 osd_op_num_shards = 48
00:30:08.511 osd_op_num_threads_per_shard = 2
00:30:08.511 osd_pg_object_context_cache_count = 10240
00:30:08.511 filestore_odsync_write = True
00:30:08.511 journal_dynamic_throttle = True
00:30:08.511
00:30:08.511 [osd.0]
00:30:08.511 osd data = /var/tmp/ceph/mnt/osd-device-0-data
00:30:08.511 osd journal = /dev/disk/by-partlabel/osd-device-0-journal
00:30:08.511
00:30:08.511 # add mon address
00:30:08.511 [mon.a]
00:30:08.511 mon addr = v2:10.0.0.1:12046
00:30:08.511 + i=0
00:30:08.511 + mkdir -p /var/tmp/ceph/mnt
00:30:08.511 ++ uuidgen
00:30:08.511 + uuid=5f48839d-129e-4271-9bda-55b07c269396
00:30:08.511 + ceph -c /var/tmp/ceph/ceph.conf osd create 5f48839d-129e-4271-9bda-55b07c269396 0
00:30:08.770 0
00:30:08.770 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid 5f48839d-129e-4271-9bda-55b07c269396 --check-needs-journal --no-mon-config
00:30:08.770 2024-07-25T09:12:15.813+0000 7fb1a268a400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory
00:30:08.770 2024-07-25T09:12:15.813+0000 7fb1a268a400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring
00:30:08.770 2024-07-25T09:12:15.869+0000 7fb1a268a400 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected 5f48839d-129e-4271-9bda-55b07c269396, invalid (someone else's?) journal
00:30:09.028 2024-07-25T09:12:15.894+0000 7fb1a268a400 -1 journal do_read_entry(4096): bad header magic
00:30:09.028 2024-07-25T09:12:15.894+0000 7fb1a268a400 -1 journal do_read_entry(4096): bad header magic
00:30:09.028 ++ hostname
00:30:09.028 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default
00:30:09.596 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map
00:30:09.856 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *'
00:30:09.856 added key for osd.0
00:30:09.856 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir
00:30:10.116 + class_dir=/lib64/rados-classes
00:30:10.116 + [[ -e /lib64/rados-classes ]]
00:30:10.116 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes
00:30:10.376 + pkill -9 ceph-osd
00:30:10.376 + true
00:30:10.376 + sleep 2
00:30:12.911 + mkdir -p /var/tmp/ceph/pid
00:30:12.911 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid
00:30:12.911 2024-07-25T09:12:19.508+0000 7f978387a400 -1 Falling back to public interface
00:30:12.911 2024-07-25T09:12:19.540+0000 7f978387a400 -1 journal do_read_entry(8192): bad header magic
00:30:12.911 2024-07-25T09:12:19.540+0000 7f978387a400 -1 journal do_read_entry(8192): bad header magic
00:30:12.911 2024-07-25T09:12:19.561+0000 7f978387a400 -1 osd.0 0 log_to_monitors true
00:30:12.911 09:12:19 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1027 -- # ip netns exec spdk_iscsi_ns ceph osd pool create rbd 128
00:30:13.847 pool 'rbd' created
00:30:13.847 09:12:20 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1028 -- # ip netns exec spdk_iscsi_ns rbd create foo --size 1000
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@15 -- # trap 'rbd_cleanup; exit 1' SIGINT SIGTERM EXIT
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@16 -- # timing_exit rbd_setup
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@20 -- # timing_enter start_iscsi_tgt
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@23 -- # pid=122390
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@22 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@25 -- # trap 'killprocess $pid; rbd_cleanup; iscsitestfini; exit 1' SIGINT SIGTERM EXIT
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@27 -- # waitforlisten 122390
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@831 -- # '[' -z 122390 ']'
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:19.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:19.114 09:12:25 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:19.114 [2024-07-25 09:12:26.041511] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:30:19.114 [2024-07-25 09:12:26.041631] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122390 ]
00:30:19.114 [2024-07-25 09:12:26.207313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:19.681 [2024-07-25 09:12:26.492583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:30:19.681 [2024-07-25 09:12:26.492760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:30:19.681 [2024-07-25 09:12:26.492893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:30:19.681 [2024-07-25 09:12:26.492983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:30:19.940 09:12:26 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:19.940 09:12:26 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@864 -- # return 0
00:30:19.940 09:12:26 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@28 -- # rpc_cmd iscsi_set_options -o 30 -a 16
00:30:19.940 09:12:26 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:19.940 09:12:26 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:19.940 09:12:26 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:19.940 09:12:26 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@29 -- # rpc_cmd framework_start_init
00:30:19.940 09:12:26 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:19.940 09:12:26 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:20.875 iscsi_tgt is listening. Running tests...
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@30 -- # echo 'iscsi_tgt is listening. Running tests...'
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@32 -- # timing_exit start_iscsi_tgt
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@34 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@35 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@36 -- # rpc_cmd bdev_rbd_register_cluster iscsi_rbd_cluster --key-file /etc/ceph/ceph.client.admin.keyring --config-file /etc/ceph/ceph.conf
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:20.875 09:12:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:21.133 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.133 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@36 -- # rbd_cluster_name=iscsi_rbd_cluster
00:30:21.133 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@37 -- # rpc_cmd bdev_rbd_get_clusters_info -b iscsi_rbd_cluster
00:30:21.133 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:21.133 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:21.133 {
00:30:21.133 "cluster_name": "iscsi_rbd_cluster",
00:30:21.133 "config_file": "/etc/ceph/ceph.conf",
00:30:21.133 "key_file": "/etc/ceph/ceph.client.admin.keyring"
00:30:21.133 }
00:30:21.133 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.133 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@38 -- # rpc_cmd bdev_rbd_create rbd foo 4096 -c iscsi_rbd_cluster
00:30:21.133 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:21.133 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:21.133 [2024-07-25 09:12:28.065086] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun
00:30:21.133 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.133 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@38 -- # rbd_bdev=Ceph0
00:30:21.133 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@39 -- # rpc_cmd bdev_get_bdevs
00:30:21.133 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:21.133 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:21.134 [
00:30:21.134 {
00:30:21.134 "name": "Ceph0",
00:30:21.134 "aliases": [
00:30:21.134 "2f9ab440-1b1e-4f9b-9359-468cf3a8cea9"
00:30:21.134 ],
00:30:21.134 "product_name": "Ceph Rbd Disk",
00:30:21.134 "block_size": 4096,
00:30:21.134 "num_blocks": 256000,
00:30:21.134 "uuid": "2f9ab440-1b1e-4f9b-9359-468cf3a8cea9",
00:30:21.134 "assigned_rate_limits": {
00:30:21.134 "rw_ios_per_sec": 0,
00:30:21.134 "rw_mbytes_per_sec": 0,
00:30:21.134 "r_mbytes_per_sec": 0,
00:30:21.134 "w_mbytes_per_sec": 0
00:30:21.134 },
00:30:21.134 "claimed": false,
00:30:21.134 "zoned": false,
00:30:21.134 "supported_io_types": {
00:30:21.134 "read": true,
00:30:21.134 "write": true,
00:30:21.134 "unmap": true,
00:30:21.134 "flush": true,
00:30:21.134 "reset": true,
00:30:21.134 "nvme_admin": false,
00:30:21.134 "nvme_io": false,
00:30:21.134 "nvme_io_md": false,
00:30:21.134 "write_zeroes": true,
00:30:21.134 "zcopy": false,
00:30:21.134 "get_zone_info": false,
00:30:21.134 "zone_management": false,
00:30:21.134 "zone_append": false,
00:30:21.134 "compare": false,
00:30:21.134 "compare_and_write": true,
00:30:21.134 "abort": false,
00:30:21.134 "seek_hole": false,
00:30:21.134 "seek_data": false,
00:30:21.134 "copy": false,
00:30:21.134 "nvme_iov_md": false
00:30:21.134 },
00:30:21.134 "driver_specific": {
00:30:21.134 "rbd": {
00:30:21.134 "pool_name": "rbd",
00:30:21.134 "rbd_name": "foo",
00:30:21.134 "config_file": "/etc/ceph/ceph.conf",
00:30:21.134 "key_file": "/etc/ceph/ceph.client.admin.keyring"
00:30:21.134 }
00:30:21.134 }
00:30:21.134 }
00:30:21.134 ]
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@41 -- # rpc_cmd bdev_rbd_resize Ceph0 2000
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:21.134 true
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # rpc_cmd bdev_get_bdevs
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # grep num_blocks
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # sed 's/[^[:digit:]]//g'
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # num_block=512000
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@44 -- # total_size=2000
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@45 -- # '[' 2000 '!=' 2000 ']'
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@53 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Ceph0:0 1:2 64 -d
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.134 09:12:28 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@54 -- # sleep 1
00:30:22.071 09:12:29 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@56 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260
00:30:22.071 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3
00:30:22.071 09:12:29 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@57 -- # iscsiadm -m node --login -p 10.0.0.1:3260
00:30:22.330 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260]
00:30:22.330 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful.
00:30:22.330 09:12:29 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@58 -- # waitforiscsidevices 1 00:30:22.330 09:12:29 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@116 -- # local num=1 00:30:22.330 09:12:29 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:30:22.330 09:12:29 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:30:22.330 [2024-07-25 09:12:29.228568] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:30:22.330 09:12:29 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:30:22.330 09:12:29 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:30:22.330 09:12:29 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # n=1 00:30:22.330 09:12:29 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:30:22.330 09:12:29 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@123 -- # return 0 00:30:22.330 09:12:29 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@60 -- # trap 'iscsicleanup; killprocess $pid; rbd_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:22.330 09:12:29 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:30:22.330 [global] 00:30:22.330 thread=1 00:30:22.330 invalidate=1 00:30:22.330 rw=randrw 00:30:22.330 time_based=1 00:30:22.330 runtime=1 00:30:22.330 ioengine=libaio 00:30:22.330 direct=1 00:30:22.330 bs=4096 00:30:22.330 iodepth=1 00:30:22.330 norandommap=0 00:30:22.330 numjobs=1 00:30:22.330 00:30:22.330 verify_dump=1 00:30:22.330 verify_backlog=512 00:30:22.330 verify_state_save=0 00:30:22.330 do_verify=1 00:30:22.330 verify=crc32c-intel 00:30:22.330 [job0] 00:30:22.330 filename=/dev/sda 00:30:22.330 queue_depth set to 113 (sda) 00:30:22.330 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:22.330 fio-3.35 00:30:22.330 Starting 1 thread 00:30:22.330 
[2024-07-25 09:12:29.417372] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:30:23.711 [2024-07-25 09:12:30.528866] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:30:23.712 00:30:23.712 job0: (groupid=0, jobs=1): err= 0: pid=122510: Thu Jul 25 09:12:30 2024 00:30:23.712 read: IOPS=63, BW=254KiB/s (261kB/s)(256KiB/1006msec) 00:30:23.712 slat (usec): min=6, max=1071, avg=47.32, stdev=131.71 00:30:23.712 clat (usec): min=7, max=4177, avg=514.20, stdev=583.38 00:30:23.712 lat (usec): min=220, max=4210, avg=561.52, stdev=589.60 00:30:23.712 clat percentiles (usec): 00:30:23.712 | 1.00th=[ 8], 5.00th=[ 212], 10.00th=[ 229], 20.00th=[ 245], 00:30:23.712 | 30.00th=[ 265], 40.00th=[ 302], 50.00th=[ 330], 60.00th=[ 363], 00:30:23.712 | 70.00th=[ 408], 80.00th=[ 603], 90.00th=[ 1020], 95.00th=[ 1319], 00:30:23.712 | 99.00th=[ 4178], 99.50th=[ 4178], 99.90th=[ 4178], 99.95th=[ 4178], 00:30:23.712 | 99.99th=[ 4178] 00:30:23.712 bw ( KiB/s): min= 216, max= 296, per=100.00%, avg=256.00, stdev=56.57, samples=2 00:30:23.712 iops : min= 54, max= 74, avg=64.00, stdev=14.14, samples=2 00:30:23.712 write: IOPS=67, BW=270KiB/s (277kB/s)(272KiB/1006msec); 0 zone resets 00:30:23.712 slat (usec): min=8, max=324, avg=44.30, stdev=43.19 00:30:23.712 clat (usec): min=4338, max=27804, avg=14198.77, stdev=4134.45 00:30:23.712 lat (usec): min=4348, max=27848, avg=14243.07, stdev=4140.18 00:30:23.712 clat percentiles (usec): 00:30:23.712 | 1.00th=[ 4359], 5.00th=[ 7046], 10.00th=[ 9110], 20.00th=[11731], 00:30:23.712 | 30.00th=[12518], 40.00th=[13435], 50.00th=[13960], 60.00th=[15008], 00:30:23.712 | 70.00th=[15664], 80.00th=[16712], 90.00th=[19268], 95.00th=[21365], 00:30:23.712 | 99.00th=[27919], 99.50th=[27919], 99.90th=[27919], 99.95th=[27919], 00:30:23.712 | 99.99th=[27919] 00:30:23.712 bw ( KiB/s): min= 240, max= 296, per=99.12%, avg=268.00, stdev=39.60, samples=2 00:30:23.712 iops : min= 60, max= 74, avg=67.00, stdev= 
9.90, samples=2 00:30:23.712 lat (usec) : 10=0.76%, 250=9.85%, 500=26.52%, 750=3.79%, 1000=2.27% 00:30:23.712 lat (msec) : 2=4.55%, 10=6.82%, 20=40.91%, 50=4.55% 00:30:23.712 cpu : usr=0.20%, sys=0.50%, ctx=143, majf=0, minf=1 00:30:23.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:23.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.712 issued rwts: total=64,68,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:23.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:23.712 00:30:23.712 Run status group 0 (all jobs): 00:30:23.712 READ: bw=254KiB/s (261kB/s), 254KiB/s-254KiB/s (261kB/s-261kB/s), io=256KiB (262kB), run=1006-1006msec 00:30:23.712 WRITE: bw=270KiB/s (277kB/s), 270KiB/s-270KiB/s (277kB/s-277kB/s), io=272KiB (279kB), run=1006-1006msec 00:30:23.712 00:30:23.712 Disk stats (read/write): 00:30:23.712 sda: ios=102/59, merge=0/0, ticks=43/856, in_queue=899, util=91.24% 00:30:23.712 09:12:30 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:30:23.712 [global] 00:30:23.712 thread=1 00:30:23.712 invalidate=1 00:30:23.712 rw=randrw 00:30:23.712 time_based=1 00:30:23.712 runtime=1 00:30:23.712 ioengine=libaio 00:30:23.712 direct=1 00:30:23.712 bs=131072 00:30:23.712 iodepth=32 00:30:23.712 norandommap=0 00:30:23.712 numjobs=1 00:30:23.712 00:30:23.712 verify_dump=1 00:30:23.712 verify_backlog=512 00:30:23.712 verify_state_save=0 00:30:23.712 do_verify=1 00:30:23.712 verify=crc32c-intel 00:30:23.712 [job0] 00:30:23.712 filename=/dev/sda 00:30:23.712 queue_depth set to 113 (sda) 00:30:23.712 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:30:23.712 fio-3.35 00:30:23.712 Starting 1 thread 00:30:23.712 [2024-07-25 09:12:30.748836] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:30:25.614 [2024-07-25 09:12:32.505902] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:30:25.614 00:30:25.614 job0: (groupid=0, jobs=1): err= 0: pid=122556: Thu Jul 25 09:12:32 2024 00:30:25.614 read: IOPS=81, BW=10.2MiB/s (10.7MB/s)(16.8MiB/1646msec) 00:30:25.614 slat (usec): min=6, max=1500, avg=54.90, stdev=144.28 00:30:25.614 clat (usec): min=3, max=47395, avg=2567.20, stdev=4862.36 00:30:25.614 lat (usec): min=253, max=47426, avg=2622.09, stdev=4850.37 00:30:25.614 clat percentiles (usec): 00:30:25.614 | 1.00th=[ 6], 5.00th=[ 297], 10.00th=[ 359], 20.00th=[ 408], 00:30:25.614 | 30.00th=[ 529], 40.00th=[ 791], 50.00th=[ 1057], 60.00th=[ 1516], 00:30:25.614 | 70.00th=[ 1811], 80.00th=[ 3359], 90.00th=[ 8029], 95.00th=[ 9503], 00:30:25.614 | 99.00th=[12911], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:30:25.614 | 99.99th=[47449] 00:30:25.614 bw ( KiB/s): min= 8704, max=25600, per=100.00%, avg=17152.00, stdev=11947.28, samples=2 00:30:25.614 iops : min= 68, max= 200, avg=134.00, stdev=93.34, samples=2 00:30:25.614 write: IOPS=76, BW=9798KiB/s (10.0MB/s)(15.8MiB/1646msec); 0 zone resets 00:30:25.614 slat (usec): min=41, max=1475, avg=172.04, stdev=229.52 00:30:25.614 clat (msec): min=27, max=1295, avg=409.55, stdev=385.23 00:30:25.614 lat (msec): min=27, max=1295, avg=409.72, stdev=385.21 00:30:25.614 clat percentiles (msec): 00:30:25.614 | 1.00th=[ 28], 5.00th=[ 52], 10.00th=[ 92], 20.00th=[ 140], 00:30:25.614 | 30.00th=[ 146], 40.00th=[ 155], 50.00th=[ 165], 60.00th=[ 317], 00:30:25.614 | 70.00th=[ 584], 80.00th=[ 751], 90.00th=[ 1083], 95.00th=[ 1234], 00:30:25.614 | 99.00th=[ 1301], 99.50th=[ 1301], 99.90th=[ 1301], 99.95th=[ 1301], 00:30:25.614 | 99.99th=[ 1301] 00:30:25.614 bw ( KiB/s): min= 254, max=18688, per=82.73%, avg=8106.00, stdev=9515.40, samples=3 00:30:25.614 iops : min= 1, max= 146, avg=63.00, stdev=74.75, samples=3 00:30:25.614 lat 
(usec) : 4=0.38%, 10=0.38%, 50=0.38%, 250=0.77%, 500=12.31% 00:30:25.614 lat (usec) : 750=5.77%, 1000=4.62% 00:30:25.614 lat (msec) : 2=12.31%, 4=5.38%, 10=7.69%, 20=1.15%, 50=1.92% 00:30:25.614 lat (msec) : 100=4.62%, 250=21.92%, 500=4.62%, 750=5.77%, 1000=3.85% 00:30:25.614 lat (msec) : 2000=6.15% 00:30:25.614 cpu : usr=0.73%, sys=0.30%, ctx=279, majf=0, minf=1 00:30:25.614 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.1%, 16=6.2%, 32=88.1%, >=64=0.0% 00:30:25.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.614 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.4%, 64=0.0%, >=64=0.0% 00:30:25.614 issued rwts: total=134,126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.614 latency : target=0, window=0, percentile=100.00%, depth=32 00:30:25.614 00:30:25.614 Run status group 0 (all jobs): 00:30:25.614 READ: bw=10.2MiB/s (10.7MB/s), 10.2MiB/s-10.2MiB/s (10.7MB/s-10.7MB/s), io=16.8MiB (17.6MB), run=1646-1646msec 00:30:25.614 WRITE: bw=9798KiB/s (10.0MB/s), 9798KiB/s-9798KiB/s (10.0MB/s-10.0MB/s), io=15.8MiB (16.5MB), run=1646-1646msec 00:30:25.614 00:30:25.614 Disk stats (read/write): 00:30:25.614 sda: ios=182/125, merge=0/0, ticks=311/41790, in_queue=42101, util=94.26% 00:30:25.614 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@65 -- # rm -f ./local-job0-0-verify.state 00:30:25.614 Cleaning up iSCSI connection 00:30:25.614 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@67 -- # trap - SIGINT SIGTERM EXIT 00:30:25.614 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@69 -- # iscsicleanup 00:30:25.614 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:30:25.614 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:30:25.614 Logging out of session [sid: 73, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:30:25.614 Logout of [sid: 73, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:30:25.614 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:30:25.614 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@985 -- # rm -rf 00:30:25.614 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@70 -- # rpc_cmd bdev_rbd_delete Ceph0 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:30:25.615 [2024-07-25 09:12:32.640655] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Ceph0) received event(SPDK_BDEV_EVENT_REMOVE) 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@71 -- # rpc_cmd bdev_rbd_unregister_cluster iscsi_rbd_cluster 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@72 -- # killprocess 122390 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@950 -- # '[' -z 122390 ']' 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@954 -- # kill -0 122390 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@955 -- # uname 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 122390 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 122390' 00:30:25.615 killing process with pid 122390 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@969 -- # kill 122390 00:30:25.615 09:12:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@974 -- # wait 122390 00:30:28.902 09:12:35 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@73 -- # rbd_cleanup 00:30:28.902 09:12:35 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1033 -- # hash ceph 00:30:28.902 09:12:35 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1034 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:30:28.902 + base_dir=/var/tmp/ceph 00:30:28.902 + image=/var/tmp/ceph/ceph_raw.img 00:30:28.902 + dev=/dev/loop200 00:30:28.902 + pkill -9 ceph 00:30:28.902 + sleep 3 00:30:32.192 + umount /dev/loop200p2 00:30:32.192 umount: /dev/loop200p2: not mounted. 00:30:32.192 + losetup -d /dev/loop200 00:30:32.192 + rm -rf /var/tmp/ceph 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1035 -- # rm -f /var/tmp/ceph_raw.img 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@75 -- # iscsitestfini 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:30:32.192 00:30:32.192 real 0m31.451s 00:30:32.192 user 0m36.850s 00:30:32.192 sys 0m2.238s 00:30:32.192 ************************************ 00:30:32.192 END TEST iscsi_tgt_rbd 00:30:32.192 ************************************ 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:30:32.192 09:12:38 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@57 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:30:32.192 09:12:38 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@59 -- # '[' 1 -eq 1 ']' 00:30:32.192 09:12:38 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@60 -- # run_test iscsi_tgt_initiator 
/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator/initiator.sh 00:30:32.192 09:12:38 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:32.192 09:12:38 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:32.192 09:12:38 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:30:32.192 ************************************ 00:30:32.192 START TEST iscsi_tgt_initiator 00:30:32.192 ************************************ 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator/initiator.sh 00:30:32.192 * Looking for test storage... 00:30:32.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@20 -- # 
TARGET_IP=10.0.0.1 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@11 -- # iscsitestinit 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@16 -- # timing_enter start_iscsi_tgt 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@19 -- # pid=122719 00:30:32.192 iSCSI target launched. 
pid: 122719 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@18 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@20 -- # echo 'iSCSI target launched. pid: 122719' 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@21 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:30:32.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@22 -- # waitforlisten 122719 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@831 -- # '[' -z 122719 ']' 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:32.192 09:12:38 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:30:32.192 [2024-07-25 09:12:39.084528] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:30:32.193 [2024-07-25 09:12:39.084692] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122719 ] 00:30:32.452 [2024-07-25 09:12:39.447223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.712 [2024-07-25 09:12:39.669738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.712 09:12:39 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:32.712 09:12:39 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@864 -- # return 0 00:30:32.712 09:12:39 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@23 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:30:32.712 09:12:39 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.712 09:12:39 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:30:32.971 09:12:39 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.971 09:12:39 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@24 -- # rpc_cmd framework_start_init 00:30:32.971 09:12:39 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.971 09:12:39 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:30:33.554 iscsi_tgt is listening. Running tests... 00:30:33.554 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.554 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@25 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:30:33.554 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@27 -- # timing_exit start_iscsi_tgt 00:30:33.554 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:33.554 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:30:33.812 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@29 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:30:33.812 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.812 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:30:33.812 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.812 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@30 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:30:33.812 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.812 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:30:33.812 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.813 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@31 -- # rpc_cmd bdev_malloc_create 64 512 00:30:33.813 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.813 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:30:33.813 Malloc0 00:30:33.813 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.813 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@36 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:30:33.813 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.813 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # 
set +x 00:30:33.813 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.813 09:12:40 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@37 -- # sleep 1 00:30:34.749 09:12:41 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@38 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:30:34.749 09:12:41 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 5 -s 512 00:30:34.749 09:12:41 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@40 -- # initiator_json_config 00:30:34.749 09:12:41 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:30:35.009 [2024-07-25 09:12:41.953410] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:35.009 [2024-07-25 09:12:41.953743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122769 ] 00:30:35.268 [2024-07-25 09:12:42.319171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.527 [2024-07-25 09:12:42.561749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.786 Running I/O for 5 seconds... 
00:30:41.081 00:30:41.081 Latency(us) 00:30:41.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.081 Job: iSCSI0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:41.081 Verification LBA range: start 0x0 length 0x4000 00:30:41.081 iSCSI0 : 5.01 19048.35 74.41 0.00 0.00 6693.66 1387.99 14423.64 00:30:41.081 =================================================================================================================== 00:30:41.081 Total : 19048.35 74.41 0.00 0.00 6693.66 1387.99 14423.64 00:30:42.982 09:12:49 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w unmap -t 5 -s 512 00:30:42.982 09:12:49 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@41 -- # initiator_json_config 00:30:42.982 09:12:49 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:30:42.982 [2024-07-25 09:12:49.740216] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:42.982 [2024-07-25 09:12:49.740474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122866 ] 00:30:42.982 [2024-07-25 09:12:50.094315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.240 [2024-07-25 09:12:50.337312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.806 Running I/O for 5 seconds... 
00:30:49.095 00:30:49.095 Latency(us) 00:30:49.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:49.095 Job: iSCSI0 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:30:49.095 iSCSI0 : 5.00 30573.46 119.43 0.00 0.00 4182.78 894.32 9043.40 00:30:49.095 =================================================================================================================== 00:30:49.095 Total : 30573.46 119.43 0.00 0.00 4182.78 894.32 9043.40 00:30:50.474 09:12:57 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w flush -t 5 -s 512 00:30:50.474 09:12:57 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@42 -- # initiator_json_config 00:30:50.474 09:12:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:30:50.474 [2024-07-25 09:12:57.530069] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:50.474 [2024-07-25 09:12:57.530236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122946 ] 00:30:51.043 [2024-07-25 09:12:57.888483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.043 [2024-07-25 09:12:58.135827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.612 Running I/O for 5 seconds... 
00:30:56.887 00:30:56.887 Latency(us) 00:30:56.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.887 Job: iSCSI0 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:30:56.887 iSCSI0 : 5.00 63216.15 246.94 0.00 0.00 2022.34 704.73 2532.72 00:30:56.887 =================================================================================================================== 00:30:56.887 Total : 63216.15 246.94 0.00 0.00 2022.34 704.73 2532.72 00:30:58.281 09:13:05 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w reset -t 10 -s 512 00:30:58.281 09:13:05 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@43 -- # initiator_json_config 00:30:58.281 09:13:05 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:30:58.281 [2024-07-25 09:13:05.328241] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:58.281 [2024-07-25 09:13:05.328515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123032 ] 00:30:58.848 [2024-07-25 09:13:05.688204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.848 [2024-07-25 09:13:05.933262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.416 Running I/O for 10 seconds... 
00:31:09.394 00:31:09.394 Latency(us) 00:31:09.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.394 Job: iSCSI0 (Core Mask 0x1, workload: reset, depth: 128, IO size: 4096) 00:31:09.394 Verification LBA range: start 0x0 length 0x4000 00:31:09.394 iSCSI0 : 10.00 18379.59 71.80 0.00 0.00 6938.09 1187.66 4521.70 00:31:09.394 =================================================================================================================== 00:31:09.394 Total : 18379.59 71.80 0.00 0.00 6938.09 1187.66 4521.70 00:31:11.298 09:13:17 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:31:11.298 09:13:17 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@47 -- # killprocess 122719 00:31:11.298 09:13:17 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@950 -- # '[' -z 122719 ']' 00:31:11.298 09:13:17 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@954 -- # kill -0 122719 00:31:11.298 09:13:17 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@955 -- # uname 00:31:11.298 09:13:17 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:11.298 09:13:17 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 122719 00:31:11.298 killing process with pid 122719 00:31:11.298 09:13:18 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:11.298 09:13:18 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:11.298 09:13:18 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@968 -- # echo 'killing process with pid 122719' 00:31:11.298 09:13:18 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@969 -- # kill 122719 00:31:11.298 09:13:18 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@974 -- # wait 122719 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@49 -- # 
iscsitestfini 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:31:14.588 00:31:14.588 real 0m42.193s 00:31:14.588 user 1m4.584s 00:31:14.588 sys 0m10.545s 00:31:14.588 ************************************ 00:31:14.588 END TEST iscsi_tgt_initiator 00:31:14.588 ************************************ 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:31:14.588 09:13:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@61 -- # run_test iscsi_tgt_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait/bdev_io_wait.sh 00:31:14.588 09:13:21 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:14.588 09:13:21 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:14.588 09:13:21 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:31:14.588 ************************************ 00:31:14.588 START TEST iscsi_tgt_bdev_io_wait 00:31:14.588 ************************************ 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait/bdev_io_wait.sh 00:31:14.588 * Looking for test storage... 
00:31:14.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@11 -- # iscsitestinit 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@16 -- # timing_enter start_iscsi_tgt 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@19 -- # pid=123238 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@18 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:31:14.588 iSCSI target launched. pid: 123238 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@20 -- # echo 'iSCSI target launched. 
pid: 123238' 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@21 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@22 -- # waitforlisten 123238 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 123238 ']' 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:14.588 09:13:21 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:14.588 [2024-07-25 09:13:21.330220] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:14.589 [2024-07-25 09:13:21.330399] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123238 ] 00:31:14.589 [2024-07-25 09:13:21.689970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.853 [2024-07-25 09:13:21.938039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.117 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:15.117 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:31:15.117 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@23 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:31:15.117 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.117 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:15.117 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.117 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@25 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:15.117 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.117 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:15.117 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.117 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@26 -- # rpc_cmd framework_start_init 00:31:15.117 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.117 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.053 iscsi_tgt is 
listening. Running tests... 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@27 -- # echo 'iscsi_tgt is listening. Running tests...' 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@29 -- # timing_exit start_iscsi_tgt 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@31 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@32 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@33 -- # rpc_cmd bdev_malloc_create 64 512 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.053 09:13:22 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.053 Malloc0 00:31:16.053 09:13:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:31:16.053 09:13:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@38 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:31:16.053 09:13:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.053 09:13:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.053 09:13:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.053 09:13:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@39 -- # sleep 1 00:31:16.987 09:13:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@40 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:31:16.987 09:13:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w write -t 1 00:31:16.987 09:13:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@42 -- # initiator_json_config 00:31:16.987 09:13:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:31:17.246 [2024-07-25 09:13:24.197558] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:17.246 [2024-07-25 09:13:24.197835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123289 ] 00:31:17.246 [2024-07-25 09:13:24.366116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.814 [2024-07-25 09:13:24.642282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.073 Running I/O for 1 seconds... 
00:31:19.011 00:31:19.011 Latency(us) 00:31:19.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.011 Job: iSCSI0 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:31:19.011 iSCSI0 : 1.00 28333.70 110.68 0.00 0.00 4506.13 1244.90 6238.80 00:31:19.011 =================================================================================================================== 00:31:19.011 Total : 28333.70 110.68 0.00 0.00 4506.13 1244.90 6238.80 00:31:20.920 09:13:27 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w read -t 1 00:31:20.920 09:13:27 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@43 -- # initiator_json_config 00:31:20.920 09:13:27 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:31:20.920 [2024-07-25 09:13:27.659758] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:20.920 [2024-07-25 09:13:27.659992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123328 ] 00:31:20.920 [2024-07-25 09:13:27.813956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.179 [2024-07-25 09:13:28.093038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.437 Running I/O for 1 seconds... 
00:31:22.811 00:31:22.811 Latency(us) 00:31:22.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.811 Job: iSCSI0 (Core Mask 0x1, workload: read, depth: 128, IO size: 4096) 00:31:22.811 iSCSI0 : 1.00 37455.56 146.31 0.00 0.00 3409.24 829.93 4349.99 00:31:22.811 =================================================================================================================== 00:31:22.811 Total : 37455.56 146.31 0.00 0.00 3409.24 829.93 4349.99 00:31:24.237 09:13:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w flush -t 1 00:31:24.237 09:13:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@44 -- # initiator_json_config 00:31:24.237 09:13:31 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:31:24.237 [2024-07-25 09:13:31.140587] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:24.237 [2024-07-25 09:13:31.140765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123366 ] 00:31:24.237 [2024-07-25 09:13:31.306095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.495 [2024-07-25 09:13:31.588651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.079 Running I/O for 1 seconds... 
00:31:26.016 00:31:26.016 Latency(us) 00:31:26.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:26.016 Job: iSCSI0 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:31:26.016 iSCSI0 : 1.00 47034.48 183.73 0.00 0.00 2715.96 754.81 3477.13 00:31:26.016 =================================================================================================================== 00:31:26.016 Total : 47034.48 183.73 0.00 0.00 2715.96 754.81 3477.13 00:31:27.391 09:13:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w unmap -t 1 00:31:27.391 09:13:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@45 -- # initiator_json_config 00:31:27.391 09:13:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:31:27.649 [2024-07-25 09:13:34.627261] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:27.649 [2024-07-25 09:13:34.627436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123404 ] 00:31:27.907 [2024-07-25 09:13:34.795490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.165 [2024-07-25 09:13:35.075466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.423 Running I/O for 1 seconds... 
00:31:29.796 00:31:29.796 Latency(us) 00:31:29.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.796 Job: iSCSI0 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:31:29.796 iSCSI0 : 1.00 23469.65 91.68 0.00 0.00 5440.55 858.55 6868.40 00:31:29.796 =================================================================================================================== 00:31:29.797 Total : 23469.65 91.68 0.00 0.00 5440.55 858.55 6868.40 00:31:31.174 09:13:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:31:31.174 09:13:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@49 -- # killprocess 123238 00:31:31.174 09:13:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 123238 ']' 00:31:31.174 09:13:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 123238 00:31:31.174 09:13:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:31:31.174 09:13:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:31.174 09:13:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123238 00:31:31.174 killing process with pid 123238 00:31:31.174 09:13:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:31.174 09:13:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:31.174 09:13:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123238' 00:31:31.174 09:13:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 123238 00:31:31.174 09:13:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 123238 00:31:33.709 09:13:40 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@51 -- # 
iscsitestfini 00:31:33.709 09:13:40 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:31:33.709 ************************************ 00:31:33.709 END TEST iscsi_tgt_bdev_io_wait 00:31:33.709 ************************************ 00:31:33.709 00:31:33.709 real 0m19.605s 00:31:33.709 user 0m29.339s 00:31:33.709 sys 0m3.655s 00:31:33.709 09:13:40 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:33.709 09:13:40 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:33.709 09:13:40 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@62 -- # run_test iscsi_tgt_resize /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize/resize.sh 00:31:33.709 09:13:40 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:33.709 09:13:40 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:33.709 09:13:40 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:31:33.709 ************************************ 00:31:33.709 START TEST iscsi_tgt_resize 00:31:33.709 ************************************ 00:31:33.709 09:13:40 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize/resize.sh 00:31:33.966 * Looking for test storage... 
00:31:33.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize 00:31:33.966 09:13:40 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:31:33.966 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 
00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@12 -- # iscsitestinit 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@14 -- # BDEV_SIZE=64 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@15 -- # BDEV_NEW_SIZE=128 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@16 -- # BLOCK_SIZE=512 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@17 -- # RESIZE_SOCK=/var/tmp/spdk-resize.sock 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@19 -- # timing_enter start_iscsi_tgt 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@22 -- # rm -f /var/tmp/spdk-resize.sock 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@25 -- # pid=123524 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:31:33.967 iSCSI target launched. pid: 123524 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@26 -- # echo 'iSCSI target launched. pid: 123524' 00:31:33.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@27 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@28 -- # waitforlisten 123524 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@831 -- # '[' -z 123524 ']' 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:33.967 09:13:40 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:31:33.967 [2024-07-25 09:13:40.993959] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:33.967 [2024-07-25 09:13:40.994786] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123524 ] 00:31:34.225 [2024-07-25 09:13:41.268446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.483 [2024-07-25 09:13:41.516425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.743 09:13:41 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:34.743 09:13:41 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@864 -- # return 0 00:31:34.743 09:13:41 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@29 -- # rpc_cmd framework_start_init 00:31:34.743 09:13:41 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.743 09:13:41 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@30 -- # echo 'iscsi_tgt is listening. Running tests...' 00:31:35.679 iscsi_tgt is listening. Running tests... 
00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@32 -- # timing_exit start_iscsi_tgt 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@34 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@35 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@36 -- # rpc_cmd bdev_null_create Null0 64 512 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:31:35.679 Null0 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@41 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Null0:0 1:2 256 -d 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.679 09:13:42 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@42 -- # sleep 1 00:31:37.048 09:13:43 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@43 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:31:37.048 09:13:43 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@47 -- # bdevperf_pid=123567 00:31:37.048 09:13:43 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@48 -- # waitforlisten 123567 /var/tmp/spdk-resize.sock 00:31:37.048 09:13:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@831 -- # '[' -z 123567 ']' 00:31:37.048 09:13:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-resize.sock 00:31:37.048 09:13:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:37.048 09:13:43 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-resize.sock --json /dev/fd/63 -q 16 -o 4096 -w read -t 5 -R -s 128 -z 00:31:37.048 09:13:43 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@46 -- # initiator_json_config 00:31:37.048 09:13:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-resize.sock...' 00:31:37.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-resize.sock... 00:31:37.048 09:13:43 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@139 -- # jq . 00:31:37.048 09:13:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:37.048 09:13:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:31:37.048 [2024-07-25 09:13:43.885608] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:37.048 [2024-07-25 09:13:43.886326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 128 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123567 ] 00:31:37.048 [2024-07-25 09:13:44.098539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.309 [2024-07-25 09:13:44.348381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@864 -- # return 0 00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@50 -- # rpc_cmd bdev_null_resize Null0 128 00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:31:37.877 [2024-07-25 09:13:44.825732] lun.c: 402:bdev_event_cb: *NOTICE*: bdev name (Null0) received event(SPDK_BDEV_EVENT_RESIZE) 00:31:37.877 true 00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # rpc_cmd -s /var/tmp/spdk-resize.sock bdev_get_bdevs 00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # jq '.[].num_blocks' 00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # num_block=131072 00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@54 -- # total_size=64 
00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@55 -- # '[' 64 '!=' 64 ']' 00:31:37.877 09:13:44 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@59 -- # sleep 2 00:31:39.782 09:13:46 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@61 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-resize.sock perform_tests 00:31:40.041 Running I/O for 5 seconds... 00:31:45.314 00:31:45.314 Latency(us) 00:31:45.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:45.314 Job: iSCSI0 (Core Mask 0x1, workload: read, depth: 16, IO size: 4096) 00:31:45.314 iSCSI0 : 5.00 46235.80 180.61 0.00 0.00 343.51 150.25 1001.64 00:31:45.314 =================================================================================================================== 00:31:45.314 Total : 46235.80 180.61 0.00 0.00 343.51 150.25 1001.64 00:31:45.314 0 00:31:45.314 09:13:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # rpc_cmd -s /var/tmp/spdk-resize.sock bdev_get_bdevs 00:31:45.314 09:13:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # jq '.[].num_blocks' 00:31:45.314 09:13:51 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.314 09:13:51 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # num_block=262144 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@65 -- # total_size=128 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@66 -- # '[' 128 '!=' 128 ']' 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@72 -- # killprocess 123567 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@950 -- # '[' -z 
123567 ']' 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # kill -0 123567 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@955 -- # uname 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123567 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:45.314 killing process with pid 123567 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123567' 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@969 -- # kill 123567 00:31:45.314 Received shutdown signal, test time was about 5.000000 seconds 00:31:45.314 00:31:45.314 Latency(us) 00:31:45.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:45.314 =================================================================================================================== 00:31:45.314 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:45.314 09:13:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@974 -- # wait 123567 00:31:46.693 09:13:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@73 -- # killprocess 123524 00:31:46.693 09:13:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@950 -- # '[' -z 123524 ']' 00:31:46.693 09:13:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # kill -0 123524 00:31:46.693 09:13:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@955 -- # uname 00:31:46.693 09:13:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:46.693 09:13:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@956 -- # 
ps --no-headers -o comm= 123524 00:31:46.693 09:13:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:46.693 09:13:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:46.693 killing process with pid 123524 00:31:46.693 09:13:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123524' 00:31:46.693 09:13:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@969 -- # kill 123524 00:31:46.693 09:13:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@974 -- # wait 123524 00:31:49.997 09:13:56 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@75 -- # iscsitestfini 00:31:49.997 09:13:56 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:31:49.997 00:31:49.997 real 0m15.803s 00:31:49.997 user 0m22.783s 00:31:49.997 sys 0m3.016s 00:31:49.997 09:13:56 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:49.997 09:13:56 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:31:49.997 ************************************ 00:31:49.997 END TEST iscsi_tgt_resize 00:31:49.997 ************************************ 00:31:49.997 09:13:56 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@65 -- # cleanup_veth_interfaces 00:31:49.997 09:13:56 iscsi_tgt -- iscsi_tgt/common.sh@95 -- # ip link set init_br nomaster 00:31:49.997 09:13:56 iscsi_tgt -- iscsi_tgt/common.sh@96 -- # ip link set tgt_br nomaster 00:31:49.997 09:13:56 iscsi_tgt -- iscsi_tgt/common.sh@97 -- # ip link set tgt_br2 nomaster 00:31:49.997 09:13:56 iscsi_tgt -- iscsi_tgt/common.sh@98 -- # ip link set init_br down 00:31:49.997 09:13:56 iscsi_tgt -- iscsi_tgt/common.sh@99 -- # ip link set tgt_br down 00:31:49.997 09:13:56 iscsi_tgt -- iscsi_tgt/common.sh@100 -- # ip link set tgt_br2 down 00:31:49.997 09:13:56 iscsi_tgt -- iscsi_tgt/common.sh@101 -- # ip link delete iscsi_br type bridge 00:31:49.997 09:13:56 
iscsi_tgt -- iscsi_tgt/common.sh@102 -- # ip link delete spdk_init_int 00:31:49.997 09:13:56 iscsi_tgt -- iscsi_tgt/common.sh@103 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:31:49.997 09:13:56 iscsi_tgt -- iscsi_tgt/common.sh@104 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:31:49.997 09:13:56 iscsi_tgt -- iscsi_tgt/common.sh@105 -- # ip netns del spdk_iscsi_ns 00:31:49.997 09:13:56 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:31:49.997 00:31:49.997 real 23m6.261s 00:31:49.997 user 41m28.260s 00:31:49.997 sys 7m26.683s 00:31:49.997 09:13:56 iscsi_tgt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:49.997 ************************************ 00:31:49.997 END TEST iscsi_tgt 00:31:49.997 09:13:56 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:31:49.997 ************************************ 00:31:49.997 09:13:56 -- spdk/autotest.sh@268 -- # run_test spdkcli_iscsi /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:31:49.997 09:13:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:49.997 09:13:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:49.997 09:13:56 -- common/autotest_common.sh@10 -- # set +x 00:31:49.997 ************************************ 00:31:49.997 START TEST spdkcli_iscsi 00:31:49.997 ************************************ 00:31:49.997 09:13:56 spdkcli_iscsi -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:31:49.997 * Looking for test storage... 
00:31:49.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:31:49.997 09:13:56 spdkcli_iscsi -- spdkcli/iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:31:49.998 09:13:56 spdkcli_iscsi -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:31:49.998 09:13:56 spdkcli_iscsi -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:31:49.998 09:13:56 spdkcli_iscsi -- spdkcli/iscsi.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 
00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:31:49.998 09:13:56 spdkcli_iscsi -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:31:49.998 09:13:56 spdkcli_iscsi -- spdkcli/iscsi.sh@12 -- # MATCH_FILE=spdkcli_iscsi.test 00:31:49.998 09:13:56 spdkcli_iscsi -- spdkcli/iscsi.sh@13 -- # SPDKCLI_BRANCH=/iscsi 00:31:49.998 09:13:56 spdkcli_iscsi -- spdkcli/iscsi.sh@15 -- # trap cleanup EXIT 00:31:49.998 09:13:56 spdkcli_iscsi -- spdkcli/iscsi.sh@17 -- # timing_enter run_iscsi_tgt 00:31:49.998 09:13:56 spdkcli_iscsi -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:49.998 09:13:56 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:31:49.998 09:13:56 spdkcli_iscsi -- spdkcli/iscsi.sh@21 -- # iscsi_tgt_pid=123821 00:31:49.998 09:13:56 spdkcli_iscsi -- spdkcli/iscsi.sh@22 -- # waitforlisten 123821 00:31:49.998 09:13:56 spdkcli_iscsi -- spdkcli/iscsi.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x3 -p 0 --wait-for-rpc 00:31:49.998 09:13:56 spdkcli_iscsi -- common/autotest_common.sh@831 -- # '[' -z 123821 ']' 00:31:49.998 09:13:56 spdkcli_iscsi -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.998 09:13:56 spdkcli_iscsi -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:49.998 09:13:56 spdkcli_iscsi -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:49.998 09:13:56 spdkcli_iscsi -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:49.998 09:13:56 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:31:49.998 [2024-07-25 09:13:57.093383] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:49.998 [2024-07-25 09:13:57.093531] [ DPDK EAL parameters: iscsi --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123821 ] 00:31:50.257 [2024-07-25 09:13:57.263705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:50.516 [2024-07-25 09:13:57.518858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.516 [2024-07-25 09:13:57.518925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.775 09:13:57 spdkcli_iscsi -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:50.775 09:13:57 spdkcli_iscsi -- common/autotest_common.sh@864 -- # return 0 00:31:50.775 09:13:57 spdkcli_iscsi -- spdkcli/iscsi.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:31:52.155 09:13:59 spdkcli_iscsi -- spdkcli/iscsi.sh@25 -- # timing_exit run_iscsi_tgt 00:31:52.155 09:13:59 spdkcli_iscsi -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:52.155 09:13:59 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:31:52.155 09:13:59 spdkcli_iscsi -- spdkcli/iscsi.sh@27 -- # timing_enter spdkcli_create_iscsi_config 00:31:52.155 09:13:59 spdkcli_iscsi -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:52.155 09:13:59 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:31:52.155 09:13:59 spdkcli_iscsi -- spdkcli/iscsi.sh@48 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc0'\'' '\''Malloc0'\'' True 00:31:52.155 '\''/bdevs/malloc create 32 512 Malloc1'\'' 
'\''Malloc1'\'' True 00:31:52.155 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:52.155 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:52.155 '\''/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"'\'' '\''host=127.0.0.1, port=3261'\'' True 00:31:52.155 '\''/iscsi/portal_groups create 2 127.0.0.1:3262'\'' '\''host=127.0.0.1, port=3262'\'' True 00:31:52.155 '\''/iscsi/initiator_groups create 2 ANY 10.0.2.15/32'\'' '\''hostname=ANY, netmask=10.0.2.15/32'\'' True 00:31:52.155 '\''/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32'\'' '\''hostname=ANZ, netmask=10.0.2.15/32'\'' True 00:31:52.155 '\''/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32'\'' '\''hostname=ANW, netmask=10.0.2.16'\'' True 00:31:52.155 '\''/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1'\'' '\''Target0'\'' True 00:31:52.155 '\''/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1'\'' '\''Target1'\'' True 00:31:52.155 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' True 00:31:52.155 '\''/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2'\'' '\''Malloc3'\'' True 00:31:52.155 '\''/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"'\'' '\''user=test3'\'' True 00:31:52.155 '\''/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2'\'' '\''user=test2'\'' True 00:31:52.155 '\''/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"'\'' '\''user=test4'\'' True 00:31:52.155 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true'\'' '\''disable_chap: True'\'' True 00:31:52.155 '\''/iscsi/global_params set_auth g=1 d=true r=false'\'' '\''disable_chap: True'\'' True 00:31:52.155 
'\''/iscsi ls'\'' '\''Malloc'\'' True 00:31:52.155 ' 00:32:00.279 Executing command: ['/bdevs/malloc create 32 512 Malloc0', 'Malloc0', True] 00:32:00.279 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:00.279 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:00.279 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:00.279 Executing command: ['/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"', 'host=127.0.0.1, port=3261', True] 00:32:00.279 Executing command: ['/iscsi/portal_groups create 2 127.0.0.1:3262', 'host=127.0.0.1, port=3262', True] 00:32:00.279 Executing command: ['/iscsi/initiator_groups create 2 ANY 10.0.2.15/32', 'hostname=ANY, netmask=10.0.2.15/32', True] 00:32:00.279 Executing command: ['/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32', 'hostname=ANZ, netmask=10.0.2.15/32', True] 00:32:00.279 Executing command: ['/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32', 'hostname=ANW, netmask=10.0.2.16', True] 00:32:00.279 Executing command: ['/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1', 'Target0', True] 00:32:00.279 Executing command: ['/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1', 'Target1', True] 00:32:00.279 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', True] 00:32:00.279 Executing command: ['/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2', 'Malloc3', True] 00:32:00.279 Executing command: ['/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"', 'user=test3', True] 00:32:00.279 Executing command: ['/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2', 'user=test2', True] 00:32:00.279 Executing command: 
['/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"', 'user=test4', True] 00:32:00.279 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true', 'disable_chap: True', True] 00:32:00.279 Executing command: ['/iscsi/global_params set_auth g=1 d=true r=false', 'disable_chap: True', True] 00:32:00.279 Executing command: ['/iscsi ls', 'Malloc', True] 00:32:00.279 09:14:06 spdkcli_iscsi -- spdkcli/iscsi.sh@49 -- # timing_exit spdkcli_create_iscsi_config 00:32:00.279 09:14:06 spdkcli_iscsi -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:00.279 09:14:06 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:00.279 09:14:06 spdkcli_iscsi -- spdkcli/iscsi.sh@51 -- # timing_enter spdkcli_check_match 00:32:00.279 09:14:06 spdkcli_iscsi -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:00.279 09:14:06 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:00.279 09:14:06 spdkcli_iscsi -- spdkcli/iscsi.sh@52 -- # check_match 00:32:00.279 09:14:06 spdkcli_iscsi -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /iscsi 00:32:00.538 09:14:07 spdkcli_iscsi -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test.match 00:32:00.538 09:14:07 spdkcli_iscsi -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test 00:32:00.538 09:14:07 spdkcli_iscsi -- spdkcli/iscsi.sh@53 -- # timing_exit spdkcli_check_match 00:32:00.538 09:14:07 spdkcli_iscsi -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:00.538 09:14:07 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:00.538 09:14:07 spdkcli_iscsi -- spdkcli/iscsi.sh@55 -- # timing_enter spdkcli_clear_iscsi_config 00:32:00.538 09:14:07 spdkcli_iscsi -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:00.538 09:14:07 spdkcli_iscsi -- 
common/autotest_common.sh@10 -- # set +x 00:32:00.538 09:14:07 spdkcli_iscsi -- spdkcli/iscsi.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/iscsi/auth_groups delete_secret 1 test2'\'' '\''user=test2'\'' 00:32:00.538 '\''/iscsi/auth_groups delete_secret_all 1'\'' '\''user=test1'\'' 00:32:00.538 '\''/iscsi/auth_groups delete 1'\'' '\''user=test1'\'' 00:32:00.538 '\''/iscsi/auth_groups delete_all'\'' '\''user=test4'\'' 00:32:00.538 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' 00:32:00.538 '\''/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1'\'' '\''Target1'\'' 00:32:00.538 '\''/iscsi/target_nodes delete_all'\'' '\''Target0'\'' 00:32:00.538 '\''/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32'\'' '\''ANW'\'' 00:32:00.538 '\''/iscsi/initiator_groups delete 3'\'' '\''ANZ'\'' 00:32:00.538 '\''/iscsi/initiator_groups delete_all'\'' '\''ANY'\'' 00:32:00.538 '\''/iscsi/portal_groups delete 1'\'' '\''127.0.0.1:3261'\'' 00:32:00.538 '\''/iscsi/portal_groups delete_all'\'' '\''127.0.0.1:3262'\'' 00:32:00.538 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:00.538 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:00.538 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:00.538 '\''/bdevs/malloc delete Malloc0'\'' '\''Malloc0'\'' 00:32:00.538 ' 00:32:07.143 Executing command: ['/iscsi/auth_groups delete_secret 1 test2', 'user=test2', False] 00:32:07.143 Executing command: ['/iscsi/auth_groups delete_secret_all 1', 'user=test1', False] 00:32:07.143 Executing command: ['/iscsi/auth_groups delete 1', 'user=test1', False] 00:32:07.143 Executing command: ['/iscsi/auth_groups delete_all', 'user=test4', False] 00:32:07.143 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', False] 00:32:07.143 Executing command: 
['/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1', 'Target1', False] 00:32:07.143 Executing command: ['/iscsi/target_nodes delete_all', 'Target0', False] 00:32:07.143 Executing command: ['/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32', 'ANW', False] 00:32:07.143 Executing command: ['/iscsi/initiator_groups delete 3', 'ANZ', False] 00:32:07.143 Executing command: ['/iscsi/initiator_groups delete_all', 'ANY', False] 00:32:07.143 Executing command: ['/iscsi/portal_groups delete 1', '127.0.0.1:3261', False] 00:32:07.143 Executing command: ['/iscsi/portal_groups delete_all', '127.0.0.1:3262', False] 00:32:07.143 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:07.143 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:07.143 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:07.143 Executing command: ['/bdevs/malloc delete Malloc0', 'Malloc0', False] 00:32:07.403 09:14:14 spdkcli_iscsi -- spdkcli/iscsi.sh@73 -- # timing_exit spdkcli_clear_iscsi_config 00:32:07.403 09:14:14 spdkcli_iscsi -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:07.403 09:14:14 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:07.403 09:14:14 spdkcli_iscsi -- spdkcli/iscsi.sh@75 -- # killprocess 123821 00:32:07.403 09:14:14 spdkcli_iscsi -- common/autotest_common.sh@950 -- # '[' -z 123821 ']' 00:32:07.403 09:14:14 spdkcli_iscsi -- common/autotest_common.sh@954 -- # kill -0 123821 00:32:07.403 09:14:14 spdkcli_iscsi -- common/autotest_common.sh@955 -- # uname 00:32:07.403 09:14:14 spdkcli_iscsi -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:07.403 09:14:14 spdkcli_iscsi -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123821 00:32:07.403 09:14:14 spdkcli_iscsi -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:07.403 09:14:14 spdkcli_iscsi -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:07.403 09:14:14 
spdkcli_iscsi -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123821' 00:32:07.403 killing process with pid 123821 00:32:07.403 09:14:14 spdkcli_iscsi -- common/autotest_common.sh@969 -- # kill 123821 00:32:07.403 09:14:14 spdkcli_iscsi -- common/autotest_common.sh@974 -- # wait 123821 00:32:10.699 09:14:17 spdkcli_iscsi -- spdkcli/iscsi.sh@1 -- # cleanup 00:32:10.699 09:14:17 spdkcli_iscsi -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:10.699 09:14:17 spdkcli_iscsi -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:32:10.699 09:14:17 spdkcli_iscsi -- spdkcli/common.sh@16 -- # '[' -n 123821 ']' 00:32:10.699 09:14:17 spdkcli_iscsi -- spdkcli/common.sh@17 -- # killprocess 123821 00:32:10.699 09:14:17 spdkcli_iscsi -- common/autotest_common.sh@950 -- # '[' -z 123821 ']' 00:32:10.699 09:14:17 spdkcli_iscsi -- common/autotest_common.sh@954 -- # kill -0 123821 00:32:10.699 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (123821) - No such process 00:32:10.699 Process with pid 123821 is not found 00:32:10.699 09:14:17 spdkcli_iscsi -- common/autotest_common.sh@977 -- # echo 'Process with pid 123821 is not found' 00:32:10.699 09:14:17 spdkcli_iscsi -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:10.699 09:14:17 spdkcli_iscsi -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_iscsi.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:10.699 00:32:10.699 real 0m20.403s 00:32:10.699 user 0m42.693s 00:32:10.699 sys 0m1.341s 00:32:10.699 09:14:17 spdkcli_iscsi -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:10.699 09:14:17 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:10.699 ************************************ 00:32:10.699 END TEST spdkcli_iscsi 00:32:10.699 ************************************ 00:32:10.699 09:14:17 -- spdk/autotest.sh@271 -- # run_test spdkcli_raid 
/home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:32:10.699 09:14:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:10.699 09:14:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:10.699 09:14:17 -- common/autotest_common.sh@10 -- # set +x 00:32:10.699 ************************************ 00:32:10.699 START TEST spdkcli_raid 00:32:10.699 ************************************ 00:32:10.699 09:14:17 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:32:10.699 * Looking for test storage... 00:32:10.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:32:10.699 09:14:17 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:32:10.699 09:14:17 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:32:10.699 09:14:17 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:32:10.699 09:14:17 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:32:10.699 09:14:17 spdkcli_raid -- 
iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:32:10.699 09:14:17 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:32:10.699 09:14:17 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:32:10.699 09:14:17 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:32:10.699 09:14:17 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:32:10.699 09:14:17 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:32:10.699 09:14:17 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:32:10.699 09:14:17 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:32:10.699 09:14:17 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:32:10.699 09:14:17 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:32:10.700 09:14:17 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT
00:32:10.700 09:14:17 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt
00:32:10.700 09:14:17 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:10.700 09:14:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:32:10.700 09:14:17 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt
00:32:10.700 09:14:17 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=124153
00:32:10.700 09:14:17 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:32:10.700 09:14:17 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 124153
00:32:10.700 09:14:17 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 124153 ']'
00:32:10.700 09:14:17 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:10.700 09:14:17 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:10.700 09:14:17 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:10.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:10.700 09:14:17 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:10.700 09:14:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:32:10.700 [2024-07-25 09:14:17.593365] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:32:10.700 [2024-07-25 09:14:17.593634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124153 ]
00:32:10.700 [2024-07-25 09:14:17.767434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:32:10.959 [2024-07-25 09:14:18.029321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:10.959 [2024-07-25 09:14:18.029381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:12.338 09:14:19 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:12.338 09:14:19 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0
00:32:12.338 09:14:19 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt
00:32:12.338 09:14:19 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:12.338 09:14:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:32:12.338 09:14:19 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc
00:32:12.338 09:14:19 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:12.338 09:14:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:32:12.338 09:14:19 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True
00:32:12.338 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True
00:32:12.338 '
00:32:13.719 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True]
00:32:13.719 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True]
00:32:13.719 09:14:20 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc
00:32:13.719 09:14:20 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:13.719 09:14:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:32:13.719 09:14:20 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid
00:32:13.719 09:14:20 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:13.719 09:14:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:32:13.719 09:14:20 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True
00:32:13.719 '
00:32:15.122 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True]
00:32:15.122 09:14:21 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid
00:32:15.122 09:14:21 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:15.122 09:14:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:32:15.122 09:14:21 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match
00:32:15.122 09:14:21 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:15.122 09:14:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:32:15.122 09:14:22 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match
00:32:15.122 09:14:22 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs
00:32:15.689 09:14:22 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match
00:32:15.689 09:14:22 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test
00:32:15.689 09:14:22 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match
00:32:15.689 09:14:22 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:15.689 09:14:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:32:15.689 09:14:22 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid
00:32:15.689 09:14:22 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:15.689 09:14:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:32:15.689 09:14:22 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True
00:32:15.689 '
00:32:16.638 Executing command: ['/bdevs/raid_volume delete testraid', '', True]
00:32:16.638 09:14:23 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid
00:32:16.638 09:14:23 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:16.638 09:14:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:32:16.897 09:14:23 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc
00:32:16.897 09:14:23 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:16.897 09:14:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:32:16.897 09:14:23 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True
00:32:16.897 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True
00:32:16.897 '
00:32:18.277 Executing command: ['/bdevs/malloc delete Malloc1', '', True]
00:32:18.277 Executing command: ['/bdevs/malloc delete Malloc2', '', True]
00:32:18.277 09:14:25 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc
00:32:18.277 09:14:25 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:18.277 09:14:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:32:18.277 09:14:25 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 124153
00:32:18.277 09:14:25 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 124153 ']'
00:32:18.277 09:14:25 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 124153
00:32:18.277 09:14:25 spdkcli_raid -- common/autotest_common.sh@955 -- # uname
00:32:18.277 09:14:25 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:18.277 09:14:25 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124153
00:32:18.277 09:14:25 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:32:18.277 09:14:25 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:32:18.277 09:14:25 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124153'
killing process with pid 124153
00:32:18.277 09:14:25 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 124153
00:32:18.277 09:14:25 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 124153
00:32:21.567 09:14:28 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup
00:32:21.567 09:14:28 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 124153 ']'
00:32:21.567 09:14:28 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 124153
00:32:21.567 09:14:28 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 124153 ']'
00:32:21.567 09:14:28 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 124153
00:32:21.567 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (124153) - No such process
00:32:21.567 09:14:28 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 124153 is not found'
Process with pid 124153 is not found
00:32:21.567 09:14:28 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']'
00:32:21.567 09:14:28 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:32:21.567 09:14:28 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:32:21.567 09:14:28 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:32:21.567
00:32:21.567 real 0m10.916s
00:32:21.567 user 0m22.044s
00:32:21.567 sys 0m1.158s
00:32:21.567 ************************************
00:32:21.567 END TEST spdkcli_raid
00:32:21.567 ************************************
00:32:21.567 09:14:28 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:21.567 09:14:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:32:21.567 09:14:28 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']'
00:32:21.567 09:14:28 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']'
00:32:21.567 09:14:28 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:32:21.567 09:14:28 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:32:21.567 09:14:28 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:32:21.567 09:14:28 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:32:21.567 09:14:28 -- spdk/autotest.sh@334 -- # '[' 1 -eq 1 ']'
00:32:21.567 09:14:28 -- spdk/autotest.sh@335 -- # run_test blockdev_rbd /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh rbd
00:32:21.567 09:14:28 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:32:21.567 09:14:28 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:21.567 09:14:28 -- common/autotest_common.sh@10 -- # set +x
00:32:21.567 ************************************
00:32:21.567 START TEST blockdev_rbd
00:32:21.567 ************************************
00:32:21.567 09:14:28 blockdev_rbd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh rbd
00:32:21.567 * Looking for test storage...
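The spdkcli_raid run above drives a fixed sequence of spdkcli commands: create two malloc bdevs, assemble them into a raid bdev, verify the tree with `ll /bdevs`, then delete everything in reverse order. A dry-run sketch of that sequence; the commands are copied from the xtrace above, but the hypothetical `run` helper only echoes them, so this executes without an SPDK target (swap `run` for `spdkcli_job.py` against a live `spdk_tgt` to drive it for real):

```shell
# Dry-run of the spdkcli command sequence from raid.sh; `run` echoes only
# (it is a stand-in for spdkcli_job.py, not part of the test scripts).
run() { printf 'spdkcli> %s\n' "$1"; }

run '/bdevs/malloc create 8 512 Malloc1'
run '/bdevs/malloc create 8 512 Malloc2'
run '/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'
run '/bdevs/raid_volume delete testraid'
run '/bdevs/malloc delete Malloc1'
run '/bdevs/malloc delete Malloc2'
```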
00:32:21.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:32:21.567 09:14:28 blockdev_rbd -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:32:21.567 09:14:28 blockdev_rbd -- bdev/nbd_common.sh@6 -- # set -e
00:32:21.567 09:14:28 blockdev_rbd -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:32:21.567 09:14:28 blockdev_rbd -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:32:21.567 09:14:28 blockdev_rbd -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:32:21.567 09:14:28 blockdev_rbd -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:32:21.567 09:14:28 blockdev_rbd -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:32:21.567 09:14:28 blockdev_rbd -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:32:21.567 09:14:28 blockdev_rbd -- bdev/blockdev.sh@20 -- # :
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@673 -- # uname -s
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@681 -- # test_type=rbd
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@682 -- # crypto_device=
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@683 -- # dek=
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@684 -- # env_ctx=
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@689 -- # [[ rbd == bdev ]]
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@689 -- # [[ rbd == crypto_* ]]
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=124427
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@49 -- # waitforlisten 124427
00:32:21.568 09:14:28 blockdev_rbd -- common/autotest_common.sh@831 -- # '[' -z 124427 ']'
00:32:21.568 09:14:28 blockdev_rbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:21.568 09:14:28 blockdev_rbd -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:32:21.568 09:14:28 blockdev_rbd -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:21.568 09:14:28 blockdev_rbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:21.568 09:14:28 blockdev_rbd -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:21.568 09:14:28 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:32:21.568 [2024-07-25 09:14:28.564976] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:32:21.568 [2024-07-25 09:14:28.565287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124427 ]
00:32:21.826 [2024-07-25 09:14:28.732163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:22.084 [2024-07-25 09:14:29.002704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:23.019 09:14:30 blockdev_rbd -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:23.019 09:14:30 blockdev_rbd -- common/autotest_common.sh@864 -- # return 0
00:32:23.019 09:14:30 blockdev_rbd -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:32:23.019 09:14:30 blockdev_rbd -- bdev/blockdev.sh@719 -- # setup_rbd_conf
00:32:23.019 09:14:30 blockdev_rbd -- bdev/blockdev.sh@260 -- # timing_enter rbd_setup
00:32:23.019 09:14:30 blockdev_rbd -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:23.019 09:14:30 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:32:23.019 09:14:30 blockdev_rbd -- bdev/blockdev.sh@261 -- # rbd_setup 127.0.0.1
00:32:23.019 09:14:30 blockdev_rbd -- common/autotest_common.sh@1007 -- # '[' -z 127.0.0.1 ']'
00:32:23.019 09:14:30 blockdev_rbd -- common/autotest_common.sh@1011 -- # '[' -n '' ']'
00:32:23.019 09:14:30 blockdev_rbd -- common/autotest_common.sh@1020 -- # hash ceph
00:32:23.019 09:14:30 blockdev_rbd -- common/autotest_common.sh@1021 -- # export PG_NUM=128
00:32:23.019 09:14:30 blockdev_rbd -- common/autotest_common.sh@1021 -- # PG_NUM=128
00:32:23.019 09:14:30 blockdev_rbd -- common/autotest_common.sh@1022 -- # export RBD_POOL=rbd
00:32:23.019 09:14:30 blockdev_rbd -- common/autotest_common.sh@1022 -- # RBD_POOL=rbd
00:32:23.019 09:14:30 blockdev_rbd -- common/autotest_common.sh@1023 -- # export RBD_NAME=foo
00:32:23.019 09:14:30 blockdev_rbd -- common/autotest_common.sh@1023 -- # RBD_NAME=foo
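Before provisioning the test cluster, `rbd_setup` above exports three knobs. A minimal sketch of those defaults; the variable names and values (PG_NUM, RBD_POOL, RBD_NAME) come straight from the xtrace, but the `:=` fallback form is an assumption about how one might make them overridable from the environment:

```shell
# Defaults taken from the rbd_setup trace above; the := form (an assumption,
# not how autotest_common.sh writes it) lets the environment override them.
: "${PG_NUM:=128}"    # placement groups for the test pool
: "${RBD_POOL:=rbd}"  # pool that holds the test image
: "${RBD_NAME:=foo}"  # name of the test RBD image
echo "$PG_NUM $RBD_POOL $RBD_NAME"
```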
00:32:23.019 09:14:30 blockdev_rbd -- common/autotest_common.sh@1024 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh
00:32:23.019 + base_dir=/var/tmp/ceph
00:32:23.019 + image=/var/tmp/ceph/ceph_raw.img
00:32:23.019 + dev=/dev/loop200
00:32:23.019 + pkill -9 ceph
00:32:23.019 + sleep 3
00:32:26.307 + umount /dev/loop200p2
00:32:26.307 umount: /dev/loop200p2: no mount point specified.
00:32:26.307 + losetup -d /dev/loop200
00:32:26.307 losetup: /dev/loop200: detach failed: No such device or address
00:32:26.307 + rm -rf /var/tmp/ceph
00:32:26.307 09:14:33 blockdev_rbd -- common/autotest_common.sh@1025 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 127.0.0.1
00:32:26.307 + set -e
00:32:26.307 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh
00:32:26.307 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph
00:32:26.307 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph
00:32:26.307 + base_dir=/var/tmp/ceph
00:32:26.307 + mon_ip=127.0.0.1
00:32:26.307 + mon_dir=/var/tmp/ceph/mon.a
00:32:26.307 + pid_dir=/var/tmp/ceph/pid
00:32:26.307 + ceph_conf=/var/tmp/ceph/ceph.conf
00:32:26.307 + mnt_dir=/var/tmp/ceph/mnt
00:32:26.307 + image=/var/tmp/ceph_raw.img
00:32:26.307 + dev=/dev/loop200
00:32:26.307 + modprobe loop
00:32:26.307 + umount /dev/loop200p2
00:32:26.307 umount: /dev/loop200p2: no mount point specified.
00:32:26.307 + true
00:32:26.307 + losetup -d /dev/loop200
00:32:26.307 losetup: /dev/loop200: detach failed: No such device or address
00:32:26.307 + true
00:32:26.307 + '[' -d /var/tmp/ceph ']'
00:32:26.307 + mkdir /var/tmp/ceph
00:32:26.307 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf
00:32:26.307 + '[' '!' -e /var/tmp/ceph_raw.img ']'
00:32:26.307 + fallocate -l 4G /var/tmp/ceph_raw.img
00:32:26.307 + mknod /dev/loop200 b 7 200
00:32:26.307 mknod: /dev/loop200: File exists
00:32:26.307 + true
00:32:26.307 + losetup /dev/loop200 /var/tmp/ceph_raw.img
00:32:26.307 Partitioning /dev/loop200
00:32:26.307 + PARTED='parted -s'
00:32:26.307 + SGDISK=sgdisk
00:32:26.307 + echo 'Partitioning /dev/loop200'
00:32:26.307 + parted -s /dev/loop200 mktable gpt
00:32:26.307 + sleep 2
00:32:28.838 + parted -s /dev/loop200 mkpart primary 0% 2GiB
00:32:28.838 + parted -s /dev/loop200 mkpart primary 2GiB 100%
00:32:28.838 Setting name on /dev/loop200
00:32:28.838 + partno=0
00:32:28.838 + echo 'Setting name on /dev/loop200'
00:32:28.838 + sgdisk -c 1:osd-device-0-journal /dev/loop200
00:32:29.405 Warning: The kernel is still using the old partition table.
00:32:29.405 The new table will be used at the next reboot or after you
00:32:29.405 run partprobe(8) or kpartx(8)
00:32:29.405 The operation has completed successfully.
00:32:29.405 + sgdisk -c 2:osd-device-0-data /dev/loop200
00:32:30.783 Warning: The kernel is still using the old partition table.
00:32:30.783 The new table will be used at the next reboot or after you
00:32:30.783 run partprobe(8) or kpartx(8)
00:32:30.783 The operation has completed successfully.
00:32:30.783 + kpartx /dev/loop200
00:32:30.783 loop200p1 : 0 4192256 /dev/loop200 2048
00:32:30.783 loop200p2 : 0 4192256 /dev/loop200 4194304
00:32:30.783 ++ ceph -v
00:32:30.783 ++ awk '{print $3}'
00:32:30.783 + ceph_version=17.2.7
00:32:30.783 + ceph_maj=17
00:32:30.783 + '[' 17 -gt 12 ']'
00:32:30.783 + update_config=true
00:32:30.783 + rm -f /var/log/ceph/ceph-mon.a.log
00:32:30.783 + set_min_mon_release='--set-min-mon-release 14'
00:32:30.783 + ceph_osd_extra_config='--check-needs-journal --no-mon-config'
00:32:30.783 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data
00:32:30.783 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data
00:32:30.783 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data
00:32:30.783 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks
00:32:30.783 = sectsz=512 attr=2, projid32bit=1
00:32:30.783 = crc=1 finobt=1, sparse=1, rmapbt=0
00:32:30.783 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:32:30.783 data = bsize=4096 blocks=524032, imaxpct=25
00:32:30.783 = sunit=0 swidth=0 blks
00:32:30.783 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:32:30.783 log =internal log bsize=4096 blocks=16384, version=2
00:32:30.783 = sectsz=512 sunit=0 blks, lazy-count=1
00:32:30.783 realtime =none extsz=4096 blocks=0, rtextents=0
00:32:30.783 Discarding blocks...Done.
00:32:30.783 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data
00:32:30.783 + cat
00:32:30.783 + rm -rf '/var/tmp/ceph/mon.a/*'
00:32:30.783 + mkdir -p /var/tmp/ceph/mon.a
00:32:30.783 + mkdir -p /var/tmp/ceph/pid
00:32:30.783 + rm -f /etc/ceph/ceph.client.admin.keyring
00:32:30.783 + ceph-authtool --create-keyring --gen-key --name=mon. /var/tmp/ceph/keyring --cap mon 'allow *'
00:32:30.783 creating /var/tmp/ceph/keyring
00:32:30.783 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring
00:32:30.783 + monmaptool --create --clobber --add a 127.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14
00:32:30.783 monmaptool: monmap file /var/tmp/ceph/monmap
00:32:30.783 monmaptool: generated fsid 04b3e838-f150-4fa0-986c-c046ae4ab4af
00:32:30.783 setting min_mon_release = octopus
00:32:30.783 epoch 0
00:32:30.783 fsid 04b3e838-f150-4fa0-986c-c046ae4ab4af
00:32:30.783 last_changed 2024-07-25T09:14:37.813521+0000
00:32:30.783 created 2024-07-25T09:14:37.813521+0000
00:32:30.783 min_mon_release 15 (octopus)
00:32:30.783 election_strategy: 1
00:32:30.783 0: v2:127.0.0.1:12046/0 mon.a
00:32:30.783 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors)
00:32:30.783 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a'
00:32:31.042 + '[' true = true ']'
00:32:31.042 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf
00:32:31.042 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring
00:32:31.042 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf
00:32:31.042 + cp /var/tmp/ceph/keyring /etc/ceph/keyring
00:32:31.042 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring
00:32:31.042 + chmod a+r /etc/ceph/ceph.client.admin.keyring
00:32:31.042 ++ hostname
00:32:31.042 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a'
00:32:31.042 + true
00:32:31.042 + '[' true = true ']'
00:32:31.042 + ceph-conf --name mon.a --show-config-value log_file
00:32:31.042 /var/log/ceph/ceph-mon.a.log
00:32:31.042 ++ ceph -s
00:32:31.042 ++ grep id
00:32:31.042 ++ awk '{print $2}'
00:32:31.301 + fsid=04b3e838-f150-4fa0-986c-c046ae4ab4af
00:32:31.301 + sed -i 's/perf = true/perf = true\n\tfsid = 04b3e838-f150-4fa0-986c-c046ae4ab4af \n/g' /var/tmp/ceph/ceph.conf
00:32:31.301 + (( ceph_maj < 18 ))
00:32:31.301 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf
00:32:31.301 + cat /var/tmp/ceph/ceph.conf
00:32:31.301 [global]
00:32:31.301 debug_lockdep = 0/0
00:32:31.301 debug_context = 0/0
00:32:31.301 debug_crush = 0/0
00:32:31.301 debug_buffer = 0/0
00:32:31.301 debug_timer = 0/0
00:32:31.301 debug_filer = 0/0
00:32:31.301 debug_objecter = 0/0
00:32:31.301 debug_rados = 0/0
00:32:31.301 debug_rbd = 0/0
00:32:31.301 debug_ms = 0/0
00:32:31.301 debug_monc = 0/0
00:32:31.301 debug_tp = 0/0
00:32:31.301 debug_auth = 0/0
00:32:31.301 debug_finisher = 0/0
00:32:31.301 debug_heartbeatmap = 0/0
00:32:31.301 debug_perfcounter = 0/0
00:32:31.301 debug_asok = 0/0
00:32:31.301 debug_throttle = 0/0
00:32:31.301 debug_mon = 0/0
00:32:31.301 debug_paxos = 0/0
00:32:31.301 debug_rgw = 0/0
00:32:31.301
00:32:31.301 perf = true
00:32:31.301 osd objectstore = filestore
00:32:31.301
00:32:31.301 fsid = 04b3e838-f150-4fa0-986c-c046ae4ab4af
00:32:31.301
00:32:31.301 mutex_perf_counter = false
00:32:31.301 throttler_perf_counter = false
00:32:31.301 rbd cache = false
00:32:31.301 mon_allow_pool_delete = true
00:32:31.301
00:32:31.301 osd_pool_default_size = 1
00:32:31.301
00:32:31.301 [mon]
00:32:31.301 mon_max_pool_pg_num=166496
00:32:31.301 mon_osd_max_split_count = 10000
00:32:31.301 mon_pg_warn_max_per_osd = 10000
00:32:31.301
00:32:31.301 [osd]
00:32:31.301 osd_op_threads = 64
00:32:31.301 filestore_queue_max_ops=5000
00:32:31.301 filestore_queue_committing_max_ops=5000
00:32:31.301 journal_max_write_entries=1000
00:32:31.301 journal_queue_max_ops=3000
00:32:31.302 objecter_inflight_ops=102400
00:32:31.302 filestore_wbthrottle_enable=false
00:32:31.302 filestore_queue_max_bytes=1048576000
00:32:31.302 filestore_queue_committing_max_bytes=1048576000
00:32:31.302 journal_max_write_bytes=1048576000
00:32:31.302 journal_queue_max_bytes=1048576000
00:32:31.302 ms_dispatch_throttle_bytes=1048576000
00:32:31.302 objecter_inflight_op_bytes=1048576000
00:32:31.302 filestore_max_sync_interval=10
00:32:31.302 osd_client_message_size_cap = 0
00:32:31.302 osd_client_message_cap = 0
00:32:31.302 osd_enable_op_tracker = false
00:32:31.302 filestore_fd_cache_size = 10240
00:32:31.302 filestore_fd_cache_shards = 64
00:32:31.302 filestore_op_threads = 16
00:32:31.302 osd_op_num_shards = 48
00:32:31.302 osd_op_num_threads_per_shard = 2
00:32:31.302 osd_pg_object_context_cache_count = 10240
00:32:31.302 filestore_odsync_write = True
00:32:31.302 journal_dynamic_throttle = True
00:32:31.302
00:32:31.302 [osd.0]
00:32:31.302 osd data = /var/tmp/ceph/mnt/osd-device-0-data
00:32:31.302 osd journal = /dev/disk/by-partlabel/osd-device-0-journal
00:32:31.302
00:32:31.302 # add mon address
00:32:31.302 [mon.a]
00:32:31.302 mon addr = v2:127.0.0.1:12046
00:32:31.302 + i=0
00:32:31.302 + mkdir -p /var/tmp/ceph/mnt
00:32:31.302 ++ uuidgen
00:32:31.302 + uuid=9a5be60d-0e9d-465e-ab87-51ce4cf99e6a
00:32:31.302 + ceph -c /var/tmp/ceph/ceph.conf osd create 9a5be60d-0e9d-465e-ab87-51ce4cf99e6a 0
00:32:31.560 0
00:32:31.560 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid 9a5be60d-0e9d-465e-ab87-51ce4cf99e6a --check-needs-journal --no-mon-config
00:32:31.819 2024-07-25T09:14:38.694+0000 7f5a4668d400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory
00:32:31.819 2024-07-25T09:14:38.695+0000 7f5a4668d400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring
00:32:31.819 2024-07-25T09:14:38.744+0000 7f5a4668d400 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected 9a5be60d-0e9d-465e-ab87-51ce4cf99e6a, invalid (someone else's?) journal
00:32:31.819 2024-07-25T09:14:38.773+0000 7f5a4668d400 -1 journal do_read_entry(4096): bad header magic
00:32:31.819 2024-07-25T09:14:38.773+0000 7f5a4668d400 -1 journal do_read_entry(4096): bad header magic
00:32:31.819 ++ hostname
00:32:31.819 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default
00:32:33.200 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map
00:32:33.200 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *'
00:32:33.460 added key for osd.0
00:32:33.460 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir
00:32:33.719 + class_dir=/lib64/rados-classes
00:32:33.719 + [[ -e /lib64/rados-classes ]]
00:32:33.719 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes
00:32:33.977 + pkill -9 ceph-osd
00:32:33.977 + true
00:32:33.977 + sleep 2
00:32:35.884 + mkdir -p /var/tmp/ceph/pid
00:32:35.884 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid
00:32:36.143 2024-07-25T09:14:43.055+0000 7fdf04670400 -1 Falling back to public interface
00:32:36.143 2024-07-25T09:14:43.104+0000 7fdf04670400 -1 journal do_read_entry(8192): bad header magic
00:32:36.143 2024-07-25T09:14:43.104+0000 7fdf04670400 -1 journal do_read_entry(8192): bad header magic
00:32:36.143 2024-07-25T09:14:43.112+0000 7fdf04670400 -1 osd.0 0 log_to_monitors true
00:32:37.085 09:14:44 blockdev_rbd -- common/autotest_common.sh@1027 -- # ceph osd pool create rbd 128
00:32:38.019 pool 'rbd' created
00:32:38.276 09:14:45 blockdev_rbd -- common/autotest_common.sh@1028 -- # rbd create foo --size 1000
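The start.sh trace above branches on the Ceph major version parsed from `ceph -v` (17.2.7 here): a major above 12 passes `--set-min-mon-release 14` to monmaptool, and a major below 18 still writes the legacy `osd objectstore = filestore` into ceph.conf. A sketch of that gate as pure string handling, so it runs without Ceph installed; the sample version line is a hypothetical stand-in modeled on `ceph -v` output:

```shell
# Version gate from the start.sh trace above, on a sample version string.
ceph_v_line='ceph version 17.2.7 (deadbeef) quincy (stable)'  # hypothetical sample
ceph_version=$(echo "$ceph_v_line" | awk '{print $3}')  # third field, e.g. 17.2.7
ceph_maj=${ceph_version%%.*}                            # major version, e.g. 17

set_min_mon_release=''
[ "$ceph_maj" -gt 12 ] && set_min_mon_release='--set-min-mon-release 14'

# The trace only shows the <18 branch adding filestore; the bluestore
# fallback here is an assumption about newer majors.
objectstore=bluestore
[ "$ceph_maj" -lt 18 ] && objectstore=filestore

echo "$ceph_version $ceph_maj $objectstore"
```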
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@262 -- # timing_exit rbd_setup
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@264 -- # rpc_cmd bdev_rbd_create -b Ceph0 rbd foo 512
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:32:41.583 [2024-07-25 09:14:48.403259] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun
00:32:41.583 WARNING:bdev_rbd_create should be used with specifying -c to have a cluster name after bdev_rbd_register_cluster.
00:32:41.583 Ceph0
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@739 -- # cat
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@748 -- # jq -r .name
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "3de8dd57-8f47-40e8-b0a4-760f5380a7e2"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "3de8dd57-8f47-40e8-b0a4-760f5380a7e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}'
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@751 -- # hello_world_bdev=Ceph0
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:32:41.583 09:14:48 blockdev_rbd -- bdev/blockdev.sh@753 -- # killprocess 124427
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@950 -- # '[' -z 124427 ']'
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@954 -- # kill -0 124427
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@955 -- # uname
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124427
00:32:41.583 killing process with pid 124427
09:14:48 blockdev_rbd -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124427'
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@969 -- # kill 124427
00:32:41.583 09:14:48 blockdev_rbd -- common/autotest_common.sh@974 -- # wait 124427
00:32:44.906 09:14:51 blockdev_rbd -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:32:44.906 09:14:51 blockdev_rbd -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Ceph0 ''
00:32:44.906 09:14:51 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:32:44.906 09:14:51 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:44.906 09:14:51 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:32:44.906 ************************************
00:32:44.906 START TEST bdev_hello_world
00:32:44.906 ************************************
00:32:44.906 09:14:51 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Ceph0 ''
00:32:44.906 [2024-07-25 09:14:51.884940] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:32:44.906 [2024-07-25 09:14:51.885127] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125334 ]
00:32:45.165 [2024-07-25 09:14:52.060811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:45.425 [2024-07-25 09:14:52.368580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:45.995 [2024-07-25 09:14:52.950771] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun
00:32:45.995 [2024-07-25 09:14:52.967550] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:32:45.995 [2024-07-25 09:14:52.967630] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Ceph0
00:32:45.995 [2024-07-25 09:14:52.967665] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:32:45.995 [2024-07-25 09:14:52.971169] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:32:45.995 [2024-07-25 09:14:52.987357] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:32:45.995 [2024-07-25 09:14:52.987428] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io
00:32:45.995 [2024-07-25 09:14:52.992655] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:32:45.995
00:32:45.995 [2024-07-25 09:14:52.992718] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:32:47.900
00:32:47.900 real 0m2.828s
00:32:47.900 user 0m2.339s
00:32:47.900 sys 0m0.370s
00:32:47.900 09:14:54 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:47.900 09:14:54 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:32:47.900 ************************************
00:32:47.900 END TEST bdev_hello_world
00:32:47.900 ************************************
00:32:47.900 09:14:54 blockdev_rbd -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:32:47.900 09:14:54 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:32:47.900 09:14:54 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:47.900 09:14:54 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x
00:32:47.900 ************************************
00:32:47.900 START TEST bdev_bounds
00:32:47.900 ************************************
00:32:47.900 09:14:54 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds ''
00:32:47.900 Process bdevio pid: 125396
00:32:47.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:47.900 09:14:54 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=125396
00:32:47.900 09:14:54 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:32:47.900 09:14:54 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 125396'
00:32:47.900 09:14:54 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:32:47.900 09:14:54 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 125396
00:32:47.900 09:14:54 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 125396 ']'
00:32:47.900 09:14:54 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:47.900 09:14:54 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:47.900 09:14:54 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:47.900 09:14:54 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:47.900 09:14:54 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:32:47.900 [2024-07-25 09:14:54.786632] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:32:47.901 [2024-07-25 09:14:54.786786] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125396 ] 00:32:47.901 [2024-07-25 09:14:54.956050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:48.160 [2024-07-25 09:14:55.233073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.160 [2024-07-25 09:14:55.233173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.160 [2024-07-25 09:14:55.233210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:48.735 [2024-07-25 09:14:55.772453] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:32:48.735 09:14:55 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:48.735 09:14:55 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:32:48.735 09:14:55 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:32:48.993 I/O targets: 00:32:48.993 Ceph0: 2048000 blocks of 512 bytes (1000 MiB) 00:32:48.993 00:32:48.993 00:32:48.993 CUnit - A unit testing framework for C - Version 2.1-3 00:32:48.993 http://cunit.sourceforge.net/ 00:32:48.993 00:32:48.993 00:32:48.993 Suite: bdevio tests on: Ceph0 00:32:48.993 Test: blockdev write read block ...passed 00:32:48.993 Test: blockdev write zeroes read block ...passed 00:32:48.993 Test: blockdev write zeroes read no split ...passed 00:32:48.993 Test: blockdev write zeroes read split ...passed 00:32:48.993 Test: blockdev write zeroes read split partial ...passed 00:32:48.993 Test: blockdev reset ...passed 00:32:48.993 Test: blockdev write read 8 blocks ...passed 00:32:48.993 Test: blockdev write read size > 128k ...passed 00:32:48.993 Test: blockdev write read 
invalid size ...passed 00:32:48.993 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:48.993 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:48.993 Test: blockdev write read max offset ...passed 00:32:48.993 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:49.252 Test: blockdev writev readv 8 blocks ...passed 00:32:49.252 Test: blockdev writev readv 30 x 1block ...passed 00:32:49.252 Test: blockdev writev readv block ...passed 00:32:49.252 Test: blockdev writev readv size > 128k ...passed 00:32:49.252 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:49.252 Test: blockdev comparev and writev ...passed 00:32:49.252 Test: blockdev nvme passthru rw ...passed 00:32:49.252 Test: blockdev nvme passthru vendor specific ...passed 00:32:49.252 Test: blockdev nvme admin passthru ...passed 00:32:49.252 Test: blockdev copy ...passed 00:32:49.252 00:32:49.252 Run Summary: Type Total Ran Passed Failed Inactive 00:32:49.252 suites 1 1 n/a 0 0 00:32:49.252 tests 23 23 23 0 0 00:32:49.252 asserts 130 130 130 0 n/a 00:32:49.252 00:32:49.252 Elapsed time = 0.615 seconds 00:32:49.252 0 00:32:49.252 09:14:56 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 125396 00:32:49.252 09:14:56 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 125396 ']' 00:32:49.252 09:14:56 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 125396 00:32:49.252 09:14:56 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:32:49.252 09:14:56 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:49.252 09:14:56 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 125396 00:32:49.252 killing process with pid 125396 00:32:49.252 09:14:56 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:49.252 09:14:56 
blockdev_rbd.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:49.252 09:14:56 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 125396' 00:32:49.253 09:14:56 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@969 -- # kill 125396 00:32:49.253 09:14:56 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@974 -- # wait 125396 00:32:51.157 09:14:57 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:32:51.157 00:32:51.157 real 0m3.188s 00:32:51.157 user 0m7.226s 00:32:51.157 sys 0m0.443s 00:32:51.157 ************************************ 00:32:51.157 END TEST bdev_bounds 00:32:51.157 ************************************ 00:32:51.157 09:14:57 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:51.157 09:14:57 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:32:51.157 09:14:57 blockdev_rbd -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Ceph0 '' 00:32:51.157 09:14:57 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:51.157 09:14:57 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:51.157 09:14:57 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:32:51.157 ************************************ 00:32:51.157 START TEST bdev_nbd 00:32:51.157 ************************************ 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Ceph0 '' 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@302 -- # 
local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Ceph0') 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Ceph0') 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=125485 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 125485 /var/tmp/spdk-nbd.sock 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 125485 ']' 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:32:51.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:51.157 09:14:57 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:32:51.157 [2024-07-25 09:14:57.994873] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:51.157 [2024-07-25 09:14:57.995113] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.157 [2024-07-25 09:14:58.161803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.417 [2024-07-25 09:14:58.430826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.986 [2024-07-25 09:14:58.964189] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Ceph0 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Ceph0') 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd 
-- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Ceph0 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Ceph0') 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:32:51.986 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Ceph0 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:52.246 1+0 records in 00:32:52.246 1+0 records out 00:32:52.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000968899 s, 4.2 MB/s 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:32:52.246 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:52.506 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:32:52.506 { 00:32:52.506 "nbd_device": "/dev/nbd0", 00:32:52.506 "bdev_name": "Ceph0" 00:32:52.506 } 00:32:52.506 ]' 00:32:52.506 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:32:52.506 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:32:52.506 { 00:32:52.506 "nbd_device": "/dev/nbd0", 00:32:52.506 "bdev_name": "Ceph0" 00:32:52.506 } 00:32:52.506 ]' 00:32:52.506 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:32:52.506 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:52.506 09:14:59 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:52.506 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:52.506 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:52.506 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:52.506 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:52.506 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:52.765 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:52.765 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:52.765 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:52.765 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:52.765 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:52.765 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:52.765 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:52.765 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:52.765 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:52.765 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:52.765 09:14:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Ceph0 /dev/nbd0 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Ceph0') 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Ceph0 /dev/nbd0 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Ceph0') 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:53.025 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Ceph0 /dev/nbd0 00:32:53.285 /dev/nbd0 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:53.285 1+0 records in 00:32:53.285 1+0 records out 00:32:53.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00152348 s, 2.7 MB/s 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:53.285 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:53.544 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:53.544 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:53.544 { 00:32:53.544 "nbd_device": "/dev/nbd0", 00:32:53.544 "bdev_name": "Ceph0" 00:32:53.544 } 00:32:53.544 ]' 00:32:53.544 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:53.544 { 00:32:53.544 "nbd_device": "/dev/nbd0", 00:32:53.544 "bdev_name": "Ceph0" 00:32:53.544 } 00:32:53.544 ]' 00:32:53.544 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:32:53.803 09:15:00 
blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:32:53.803 256+0 records in 00:32:53.803 256+0 records out 00:32:53.803 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127434 s, 82.3 MB/s 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:53.803 09:15:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:32:55.179 256+0 records in 00:32:55.179 256+0 records out 00:32:55.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 1.30123 s, 806 kB/s 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@104 -- 
# nbd_get_count /var/tmp/spdk-nbd.sock 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:55.179 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:32:55.439 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:32:55.698 malloc_lvol_verify 00:32:55.698 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:32:55.960 8ae520f3-951c-413d-bc47-eed0be78deba 00:32:55.960 09:15:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:32:56.223 9a9a54f9-2186-4147-ad71-09e48d621369 00:32:56.223 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:32:56.482 /dev/nbd0 00:32:56.482 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:32:56.482 mke2fs 1.46.5 (30-Dec-2021) 00:32:56.482 Discarding device blocks: 0/4096 done 00:32:56.482 Creating filesystem with 4096 1k blocks and 1024 inodes 00:32:56.482 00:32:56.482 Allocating group tables: 0/1 done 00:32:56.482 Writing inode tables: 0/1 done 00:32:56.482 Creating journal (1024 blocks): done 00:32:56.482 Writing superblocks and filesystem accounting information: 0/1 done 00:32:56.482 00:32:56.482 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:32:56.482 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:56.482 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:56.482 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:56.482 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:56.482 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:56.482 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:56.482 09:15:03 
blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 125485 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 125485 ']' 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 125485 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 125485 00:32:56.742 09:15:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:56.743 09:15:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:56.743 killing process with pid 125485 00:32:56.743 09:15:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 125485' 00:32:56.743 09:15:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@969 -- # kill 125485 00:32:56.743 09:15:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@974 -- # wait 125485 00:32:58.651 09:15:05 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:32:58.651 00:32:58.651 real 0m7.399s 00:32:58.651 user 0m9.286s 00:32:58.651 sys 0m1.833s 00:32:58.651 09:15:05 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:58.651 09:15:05 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:32:58.651 ************************************ 00:32:58.651 END TEST bdev_nbd 00:32:58.651 ************************************ 00:32:58.651 09:15:05 blockdev_rbd -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:32:58.651 09:15:05 blockdev_rbd -- bdev/blockdev.sh@763 -- # '[' rbd = nvme ']' 00:32:58.651 09:15:05 blockdev_rbd -- bdev/blockdev.sh@763 -- # '[' rbd = gpt ']' 00:32:58.651 09:15:05 blockdev_rbd -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:32:58.651 09:15:05 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:58.651 09:15:05 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:58.651 09:15:05 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:32:58.651 ************************************ 00:32:58.651 START TEST bdev_fio 00:32:58.651 ************************************ 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:32:58.651 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:32:58.651 09:15:05 
blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- 
common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Ceph0]' 00:32:58.651 09:15:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Ceph0 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:58.652 ************************************ 00:32:58.652 START TEST bdev_fio_rw_verify 00:32:58.652 ************************************ 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- 
# fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # 
break 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:58.652 09:15:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:58.652 job_Ceph0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:58.652 fio-3.35 00:32:58.652 Starting 1 thread 00:33:10.854 00:33:10.854 job_Ceph0: (groupid=0, jobs=1): err= 0: pid=125736: Thu Jul 25 09:15:16 2024 00:33:10.854 read: IOPS=511, BW=2047KiB/s (2096kB/s)(20.0MiB/10004msec) 00:33:10.854 slat (usec): min=4, max=379, avg=14.72, stdev=13.52 00:33:10.854 clat (usec): min=239, max=447388, avg=3888.29, stdev=27566.68 00:33:10.854 lat (usec): min=252, max=447412, avg=3903.02, stdev=27566.80 00:33:10.854 clat percentiles (usec): 00:33:10.854 | 50.000th=[ 1074], 99.000th=[ 61080], 99.900th=[446694], 00:33:10.854 | 99.990th=[446694], 99.999th=[446694] 00:33:10.854 write: IOPS=536, BW=2147KiB/s (2199kB/s)(21.0MiB/10004msec); 0 zone resets 00:33:10.854 slat (usec): min=17, max=1485, avg=49.76, stdev=48.42 00:33:10.854 clat (msec): min=2, max=252, avg=11.11, stdev=22.64 00:33:10.854 lat (msec): min=2, max=252, avg=11.16, stdev=22.65 00:33:10.854 clat percentiles (msec): 00:33:10.854 | 50.000th=[ 7], 99.000th=[ 124], 99.900th=[ 207], 99.990th=[ 253], 00:33:10.854 | 99.999th=[ 253] 00:33:10.854 bw ( KiB/s): min= 432, max= 5288, per=100.00%, avg=2158.74, stdev=1569.31, samples=19 00:33:10.854 iops : min= 108, max= 1322, avg=539.68, stdev=392.33, samples=19 00:33:10.854 lat (usec) : 250=0.02%, 500=0.54%, 750=5.39%, 1000=13.63% 00:33:10.854 lat 
(msec) : 2=25.93%, 4=6.88%, 10=42.12%, 20=2.06%, 50=0.40% 00:33:10.854 lat (msec) : 100=1.39%, 250=1.37%, 500=0.27% 00:33:10.854 cpu : usr=97.60%, sys=1.09%, ctx=500, majf=0, minf=14301 00:33:10.854 IO depths : 1=0.1%, 2=0.1%, 4=7.9%, 8=92.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:10.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.854 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.854 issued rwts: total=5120,5370,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.854 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:10.854 00:33:10.854 Run status group 0 (all jobs): 00:33:10.854 READ: bw=2047KiB/s (2096kB/s), 2047KiB/s-2047KiB/s (2096kB/s-2096kB/s), io=20.0MiB (21.0MB), run=10004-10004msec 00:33:10.854 WRITE: bw=2147KiB/s (2199kB/s), 2147KiB/s-2147KiB/s (2199kB/s-2199kB/s), io=21.0MiB (22.0MB), run=10004-10004msec 00:33:11.421 ----------------------------------------------------- 00:33:11.421 Suppressions used: 00:33:11.421 count bytes template 00:33:11.421 1 6 /usr/src/fio/parse.c 00:33:11.421 243 23328 /usr/src/fio/iolog.c 00:33:11.421 1 8 libtcmalloc_minimal.so 00:33:11.421 1 904 libcrypto.so 00:33:11.421 ----------------------------------------------------- 00:33:11.421 00:33:11.421 00:33:11.421 real 0m13.024s 00:33:11.421 user 0m13.795s 00:33:11.421 sys 0m1.778s 00:33:11.421 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:11.421 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:33:11.421 ************************************ 00:33:11.421 END TEST bdev_fio_rw_verify 00:33:11.421 ************************************ 00:33:11.421 09:15:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:33:11.421 09:15:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:11.421 09:15:18 blockdev_rbd.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:33:11.421 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:11.421 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:33:11.421 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:33:11.421 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:33:11.422 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:33:11.422 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:33:11.422 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:33:11.422 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:33:11.422 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:11.422 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:33:11.422 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:33:11.422 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:33:11.422 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:33:11.422 09:15:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "3de8dd57-8f47-40e8-b0a4-760f5380a7e2"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "3de8dd57-8f47-40e8-b0a4-760f5380a7e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:33:11.422 09:15:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Ceph0 ]] 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "3de8dd57-8f47-40e8-b0a4-760f5380a7e2"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "3de8dd57-8f47-40e8-b0a4-760f5380a7e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 
00:33:11.681 09:15:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Ceph0]' 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Ceph0 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:11.681 ************************************ 00:33:11.681 START TEST bdev_fio_trim 00:33:11.681 ************************************ 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:11.681 09:15:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:11.940 job_Ceph0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:11.940 fio-3.35 00:33:11.940 Starting 1 thread 00:33:24.227 00:33:24.227 job_Ceph0: (groupid=0, jobs=1): err= 0: pid=125933: Thu Jul 25 09:15:29 2024 00:33:24.227 write: IOPS=843, BW=3372KiB/s (3453kB/s)(33.0MiB/10009msec); 0 zone resets 00:33:24.227 slat (usec): min=8, max=1256, avg=34.92, stdev=46.00 00:33:24.227 clat (usec): min=1741, max=304777, avg=9273.91, stdev=16049.53 00:33:24.227 lat (usec): min=1762, max=304792, avg=9308.83, stdev=16050.18 00:33:24.227 clat percentiles (msec): 00:33:24.227 | 50.000th=[ 9], 99.000th=[ 19], 99.900th=[ 305], 99.990th=[ 305], 00:33:24.227 | 99.999th=[ 305] 00:33:24.227 bw ( KiB/s): min= 208, max= 5520, per=99.97%, avg=3371.90, stdev=1102.76, samples=20 00:33:24.227 iops : min= 52, max= 1380, avg=842.95, stdev=275.66, samples=20 00:33:24.227 trim: IOPS=843, BW=3372KiB/s (3453kB/s)(33.0MiB/10009msec); 0 zone resets 00:33:24.227 slat (usec): min=5, max=1256, avg=19.05, stdev=29.95 00:33:24.227 clat (usec): min=3, max=9888, avg=138.54, stdev=249.40 00:33:24.227 lat (usec): min=16, max=10113, avg=157.58, stdev=252.38 00:33:24.227 clat percentiles (usec): 00:33:24.227 | 50.000th=[ 98], 99.000th=[ 586], 99.900th=[ 1139], 99.990th=[ 9896], 00:33:24.227 | 99.999th=[ 9896] 00:33:24.227 bw ( KiB/s): min= 208, max= 5496, per=100.00%, avg=3374.70, stdev=1104.42, samples=20 00:33:24.227 iops : min= 52, max= 1374, avg=843.65, stdev=276.07, samples=20 00:33:24.227 lat (usec) : 4=0.02%, 10=0.41%, 20=1.97%, 50=10.38%, 100=12.71% 00:33:24.227 lat (usec) : 250=17.87%, 500=5.78%, 750=0.71%, 1000=0.10% 00:33:24.227 lat (msec) : 2=0.03%, 4=3.48%, 10=34.24%, 20=11.83%, 50=0.17% 00:33:24.227 lat (msec) : 100=0.07%, 250=0.10%, 500=0.14% 00:33:24.227 cpu : usr=96.81%, sys=1.55%, 
ctx=1246, majf=0, minf=22311 00:33:24.227 IO depths : 1=0.1%, 2=0.4%, 4=23.2%, 8=76.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:24.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.227 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.227 issued rwts: total=0,8438,8438,0 short=0,0,0,0 dropped=0,0,0,0 00:33:24.227 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:24.227 00:33:24.227 Run status group 0 (all jobs): 00:33:24.227 WRITE: bw=3372KiB/s (3453kB/s), 3372KiB/s-3372KiB/s (3453kB/s-3453kB/s), io=33.0MiB (34.6MB), run=10009-10009msec 00:33:24.227 TRIM: bw=3372KiB/s (3453kB/s), 3372KiB/s-3372KiB/s (3453kB/s-3453kB/s), io=33.0MiB (34.6MB), run=10009-10009msec 00:33:24.484 ----------------------------------------------------- 00:33:24.484 Suppressions used: 00:33:24.484 count bytes template 00:33:24.484 1 6 /usr/src/fio/parse.c 00:33:24.484 1 8 libtcmalloc_minimal.so 00:33:24.484 1 904 libcrypto.so 00:33:24.484 ----------------------------------------------------- 00:33:24.484 00:33:24.484 00:33:24.484 real 0m12.905s 00:33:24.484 user 0m13.222s 00:33:24.484 sys 0m1.184s 00:33:24.484 09:15:31 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:24.484 09:15:31 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:33:24.484 ************************************ 00:33:24.484 END TEST bdev_fio_trim 00:33:24.484 ************************************ 00:33:24.484 09:15:31 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:33:24.484 09:15:31 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:24.484 09:15:31 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:33:24.484 /home/vagrant/spdk_repo/spdk 00:33:24.484 09:15:31 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:33:24.484 00:33:24.484 real 0m26.252s 00:33:24.484 user 
0m27.176s 00:33:24.484 sys 0m3.130s 00:33:24.484 09:15:31 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:24.484 09:15:31 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:24.485 ************************************ 00:33:24.485 END TEST bdev_fio 00:33:24.485 ************************************ 00:33:24.743 09:15:31 blockdev_rbd -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:24.743 09:15:31 blockdev_rbd -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:24.743 09:15:31 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:33:24.743 09:15:31 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:24.743 09:15:31 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:24.743 ************************************ 00:33:24.743 START TEST bdev_verify 00:33:24.743 ************************************ 00:33:24.743 09:15:31 blockdev_rbd.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:24.743 [2024-07-25 09:15:31.798622] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:33:24.743 [2024-07-25 09:15:31.799365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126085 ] 00:33:25.002 [2024-07-25 09:15:31.971416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:25.267 [2024-07-25 09:15:32.257522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.267 [2024-07-25 09:15:32.257578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.478 [2024-07-25 09:15:36.253309] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:33:29.478 Running I/O for 5 seconds... 00:33:34.751 00:33:34.751 Latency(us) 00:33:34.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.751 Job: Ceph0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:34.751 Verification LBA range: start 0x0 length 0x1f400 00:33:34.751 Ceph0 : 5.02 2147.71 8.39 0.00 0.00 59450.84 1638.40 743618.96 00:33:34.751 Job: Ceph0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:34.751 Verification LBA range: start 0x1f400 length 0x1f400 00:33:34.751 Ceph0 : 5.04 1712.70 6.69 0.00 0.00 74483.51 4664.79 904797.46 00:33:34.751 =================================================================================================================== 00:33:34.751 Total : 3860.41 15.08 0.00 0.00 66135.12 1638.40 904797.46 00:33:36.131 00:33:36.131 real 0m11.466s 00:33:36.131 user 0m19.794s 00:33:36.131 sys 0m1.973s 00:33:36.131 09:15:43 blockdev_rbd.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:36.131 09:15:43 blockdev_rbd.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:33:36.131 ************************************ 00:33:36.131 END TEST bdev_verify 00:33:36.131 ************************************ 00:33:36.131 09:15:43 
blockdev_rbd -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:36.131 09:15:43 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:33:36.131 09:15:43 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:36.131 09:15:43 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:36.131 ************************************ 00:33:36.131 START TEST bdev_verify_big_io 00:33:36.131 ************************************ 00:33:36.131 09:15:43 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:36.388 [2024-07-25 09:15:43.276784] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:33:36.388 [2024-07-25 09:15:43.276950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126232 ] 00:33:36.388 [2024-07-25 09:15:43.447063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:36.645 [2024-07-25 09:15:43.731280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.645 [2024-07-25 09:15:43.731340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.212 [2024-07-25 09:15:44.293831] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:33:37.472 Running I/O for 5 seconds... 
00:33:42.739 00:33:42.739 Latency(us) 00:33:42.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.739 Job: Ceph0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:42.739 Verification LBA range: start 0x0 length 0x1f40 00:33:42.739 Ceph0 : 5.07 546.03 34.13 0.00 0.00 230061.66 5151.30 338841.15 00:33:42.739 Job: Ceph0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:42.739 Verification LBA range: start 0x1f40 length 0x1f40 00:33:42.739 Ceph0 : 5.11 528.12 33.01 0.00 0.00 236906.27 6496.36 468882.89 00:33:42.739 =================================================================================================================== 00:33:42.739 Total : 1074.15 67.13 0.00 0.00 233441.39 5151.30 468882.89 00:33:44.113 00:33:44.113 real 0m7.974s 00:33:44.113 user 0m15.320s 00:33:44.113 sys 0m1.240s 00:33:44.113 09:15:51 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:44.113 09:15:51 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:33:44.113 ************************************ 00:33:44.113 END TEST bdev_verify_big_io 00:33:44.113 ************************************ 00:33:44.113 09:15:51 blockdev_rbd -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:44.113 09:15:51 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:33:44.113 09:15:51 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:44.113 09:15:51 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:44.113 ************************************ 00:33:44.113 START TEST bdev_write_zeroes 00:33:44.113 ************************************ 00:33:44.113 09:15:51 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:44.372 [2024-07-25 09:15:51.299883] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:33:44.372 [2024-07-25 09:15:51.300051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126350 ] 00:33:44.372 [2024-07-25 09:15:51.468983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.938 [2024-07-25 09:15:51.763942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.198 [2024-07-25 09:15:52.309718] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:33:45.456 Running I/O for 1 seconds... 00:33:46.839 00:33:46.839 Latency(us) 00:33:46.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.839 Job: Ceph0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:46.839 Ceph0 : 1.52 3505.92 13.69 0.00 0.00 31065.14 4035.19 597093.06 00:33:46.839 =================================================================================================================== 00:33:46.839 Total : 3505.92 13.69 0.00 0.00 31065.14 4035.19 597093.06 00:33:48.774 00:33:48.774 real 0m4.332s 00:33:48.774 user 0m4.343s 00:33:48.774 sys 0m0.574s 00:33:48.774 09:15:55 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:48.774 09:15:55 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:33:48.774 ************************************ 00:33:48.774 END TEST bdev_write_zeroes 00:33:48.774 ************************************ 00:33:48.774 09:15:55 blockdev_rbd -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:48.774 09:15:55 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:33:48.774 09:15:55 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:48.774 09:15:55 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:48.774 ************************************ 00:33:48.774 START TEST bdev_json_nonenclosed 00:33:48.774 ************************************ 00:33:48.774 09:15:55 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:48.774 [2024-07-25 09:15:55.688398] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:33:48.774 [2024-07-25 09:15:55.688558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126428 ] 00:33:48.774 [2024-07-25 09:15:55.853095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.033 [2024-07-25 09:15:56.136419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.033 [2024-07-25 09:15:56.136527] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:33:49.033 [2024-07-25 09:15:56.136563] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:33:49.033 [2024-07-25 09:15:56.136588] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:49.599 00:33:49.599 real 0m1.134s 00:33:49.599 user 0m0.888s 00:33:49.599 sys 0m0.137s 00:33:49.599 09:15:56 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:49.599 09:15:56 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:33:49.599 ************************************ 00:33:49.599 END TEST bdev_json_nonenclosed 00:33:49.599 ************************************ 00:33:49.857 09:15:56 blockdev_rbd -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:49.857 09:15:56 blockdev_rbd -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:33:49.857 09:15:56 blockdev_rbd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:49.857 09:15:56 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:49.857 ************************************ 00:33:49.857 START TEST bdev_json_nonarray 00:33:49.857 ************************************ 00:33:49.857 09:15:56 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:49.857 [2024-07-25 09:15:56.858897] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:33:49.857 [2024-07-25 09:15:56.859073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126459 ] 00:33:50.116 [2024-07-25 09:15:57.014991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.373 [2024-07-25 09:15:57.298581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.373 [2024-07-25 09:15:57.298727] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:33:50.373 [2024-07-25 09:15:57.298774] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:33:50.373 [2024-07-25 09:15:57.298797] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:50.971 00:33:50.971 real 0m1.104s 00:33:50.971 user 0m0.858s 00:33:50.971 sys 0m0.137s 00:33:50.971 09:15:57 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:50.972 09:15:57 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:33:50.972 ************************************ 00:33:50.972 END TEST bdev_json_nonarray 00:33:50.972 ************************************ 00:33:50.972 09:15:57 blockdev_rbd -- bdev/blockdev.sh@786 -- # [[ rbd == bdev ]] 00:33:50.972 09:15:57 blockdev_rbd -- bdev/blockdev.sh@793 -- # [[ rbd == gpt ]] 00:33:50.972 09:15:57 blockdev_rbd -- bdev/blockdev.sh@797 -- # [[ rbd == crypto_sw ]] 00:33:50.972 09:15:57 blockdev_rbd -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:33:50.972 09:15:57 blockdev_rbd -- bdev/blockdev.sh@810 -- # cleanup 00:33:50.972 09:15:57 blockdev_rbd -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:33:50.972 09:15:57 blockdev_rbd -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 
00:33:50.972 09:15:57 blockdev_rbd -- bdev/blockdev.sh@26 -- # [[ rbd == rbd ]] 00:33:50.972 09:15:57 blockdev_rbd -- bdev/blockdev.sh@27 -- # rbd_cleanup 00:33:50.972 09:15:57 blockdev_rbd -- common/autotest_common.sh@1033 -- # hash ceph 00:33:50.972 09:15:57 blockdev_rbd -- common/autotest_common.sh@1034 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:33:50.972 + base_dir=/var/tmp/ceph 00:33:50.972 + image=/var/tmp/ceph/ceph_raw.img 00:33:50.972 + dev=/dev/loop200 00:33:50.972 + pkill -9 ceph 00:33:50.972 + sleep 3 00:33:54.256 + umount /dev/loop200p2 00:33:54.256 + losetup -d /dev/loop200 00:33:54.256 + rm -rf /var/tmp/ceph 00:33:54.256 09:16:01 blockdev_rbd -- common/autotest_common.sh@1035 -- # rm -f /var/tmp/ceph_raw.img 00:33:54.256 09:16:01 blockdev_rbd -- bdev/blockdev.sh@30 -- # [[ rbd == daos ]] 00:33:54.256 09:16:01 blockdev_rbd -- bdev/blockdev.sh@34 -- # [[ rbd = \g\p\t ]] 00:33:54.256 09:16:01 blockdev_rbd -- bdev/blockdev.sh@40 -- # [[ rbd == xnvme ]] 00:33:54.256 ************************************ 00:33:54.256 END TEST blockdev_rbd 00:33:54.256 ************************************ 00:33:54.256 00:33:54.256 real 1m33.050s 00:33:54.256 user 1m52.777s 00:33:54.256 sys 0m11.905s 00:33:54.256 09:16:01 blockdev_rbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:54.256 09:16:01 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:54.256 09:16:01 -- spdk/autotest.sh@336 -- # run_test spdkcli_rbd /home/vagrant/spdk_repo/spdk/test/spdkcli/rbd.sh 00:33:54.256 09:16:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:54.256 09:16:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:54.256 09:16:01 -- common/autotest_common.sh@10 -- # set +x 00:33:54.516 ************************************ 00:33:54.516 START TEST spdkcli_rbd 00:33:54.516 ************************************ 00:33:54.516 09:16:01 spdkcli_rbd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/rbd.sh 
00:33:54.516 * Looking for test storage... 00:33:54.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:54.516 09:16:01 spdkcli_rbd -- spdkcli/rbd.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:33:54.516 09:16:01 spdkcli_rbd -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:33:54.516 09:16:01 spdkcli_rbd -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:33:54.516 09:16:01 spdkcli_rbd -- spdkcli/rbd.sh@11 -- # MATCH_FILE=spdkcli_rbd.test 00:33:54.516 09:16:01 spdkcli_rbd -- spdkcli/rbd.sh@12 -- # SPDKCLI_BRANCH=/bdevs/rbd 00:33:54.516 09:16:01 spdkcli_rbd -- spdkcli/rbd.sh@14 -- # trap 'rbd_cleanup; cleanup' EXIT 00:33:54.516 09:16:01 spdkcli_rbd -- spdkcli/rbd.sh@15 -- # timing_enter run_spdk_tgt 00:33:54.516 09:16:01 spdkcli_rbd -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:54.516 09:16:01 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:54.516 09:16:01 spdkcli_rbd -- spdkcli/rbd.sh@16 -- # run_spdk_tgt 00:33:54.516 09:16:01 spdkcli_rbd -- spdkcli/common.sh@27 -- # spdk_tgt_pid=126588 00:33:54.516 09:16:01 spdkcli_rbd -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:33:54.516 09:16:01 spdkcli_rbd -- spdkcli/common.sh@28 -- # waitforlisten 126588 00:33:54.516 09:16:01 spdkcli_rbd -- common/autotest_common.sh@831 -- # '[' -z 126588 ']' 00:33:54.516 09:16:01 spdkcli_rbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.516 09:16:01 spdkcli_rbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:54.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.516 09:16:01 spdkcli_rbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:54.516 09:16:01 spdkcli_rbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:54.516 09:16:01 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:54.774 [2024-07-25 09:16:01.682141] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:33:54.774 [2024-07-25 09:16:01.682309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126588 ] 00:33:54.774 [2024-07-25 09:16:01.835794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:55.033 [2024-07-25 09:16:02.113914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.033 [2024-07-25 09:16:02.113946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:56.413 09:16:03 spdkcli_rbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:56.413 09:16:03 spdkcli_rbd -- common/autotest_common.sh@864 -- # return 0 00:33:56.413 09:16:03 spdkcli_rbd -- spdkcli/rbd.sh@17 -- # timing_exit run_spdk_tgt 00:33:56.413 09:16:03 spdkcli_rbd -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:56.413 09:16:03 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:56.413 09:16:03 spdkcli_rbd -- spdkcli/rbd.sh@19 -- # timing_enter spdkcli_create_rbd_config 00:33:56.413 09:16:03 spdkcli_rbd -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:56.413 09:16:03 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:56.413 09:16:03 spdkcli_rbd -- spdkcli/rbd.sh@20 -- # rbd_cleanup 00:33:56.413 09:16:03 spdkcli_rbd -- common/autotest_common.sh@1033 -- # hash ceph 00:33:56.413 09:16:03 spdkcli_rbd -- common/autotest_common.sh@1034 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:33:56.413 + base_dir=/var/tmp/ceph 00:33:56.413 + image=/var/tmp/ceph/ceph_raw.img 00:33:56.413 + dev=/dev/loop200 
00:33:56.413 + pkill -9 ceph 00:33:56.414 + sleep 3 00:33:59.692 + umount /dev/loop200p2 00:33:59.692 umount: /dev/loop200p2: no mount point specified. 00:33:59.692 + losetup -d /dev/loop200 00:33:59.692 losetup: /dev/loop200: detach failed: No such device or address 00:33:59.692 + rm -rf /var/tmp/ceph 00:33:59.692 09:16:06 spdkcli_rbd -- common/autotest_common.sh@1035 -- # rm -f /var/tmp/ceph_raw.img 00:33:59.692 09:16:06 spdkcli_rbd -- spdkcli/rbd.sh@21 -- # rbd_setup 127.0.0.1 00:33:59.692 09:16:06 spdkcli_rbd -- common/autotest_common.sh@1007 -- # '[' -z 127.0.0.1 ']' 00:33:59.692 09:16:06 spdkcli_rbd -- common/autotest_common.sh@1011 -- # '[' -n '' ']' 00:33:59.692 09:16:06 spdkcli_rbd -- common/autotest_common.sh@1020 -- # hash ceph 00:33:59.692 09:16:06 spdkcli_rbd -- common/autotest_common.sh@1021 -- # export PG_NUM=128 00:33:59.692 09:16:06 spdkcli_rbd -- common/autotest_common.sh@1021 -- # PG_NUM=128 00:33:59.692 09:16:06 spdkcli_rbd -- common/autotest_common.sh@1022 -- # export RBD_POOL=rbd 00:33:59.692 09:16:06 spdkcli_rbd -- common/autotest_common.sh@1022 -- # RBD_POOL=rbd 00:33:59.692 09:16:06 spdkcli_rbd -- common/autotest_common.sh@1023 -- # export RBD_NAME=foo 00:33:59.692 09:16:06 spdkcli_rbd -- common/autotest_common.sh@1023 -- # RBD_NAME=foo 00:33:59.692 09:16:06 spdkcli_rbd -- common/autotest_common.sh@1024 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:33:59.692 + base_dir=/var/tmp/ceph 00:33:59.692 + image=/var/tmp/ceph/ceph_raw.img 00:33:59.692 + dev=/dev/loop200 00:33:59.692 + pkill -9 ceph 00:33:59.692 + sleep 3 00:34:02.225 + umount /dev/loop200p2 00:34:02.484 umount: /dev/loop200p2: no mount point specified. 
00:34:02.484 + losetup -d /dev/loop200 00:34:02.484 losetup: /dev/loop200: detach failed: No such device or address 00:34:02.484 + rm -rf /var/tmp/ceph 00:34:02.484 09:16:09 spdkcli_rbd -- common/autotest_common.sh@1025 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 127.0.0.1 00:34:02.484 + set -e 00:34:02.484 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:34:02.484 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:34:02.484 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:34:02.484 + base_dir=/var/tmp/ceph 00:34:02.484 + mon_ip=127.0.0.1 00:34:02.484 + mon_dir=/var/tmp/ceph/mon.a 00:34:02.484 + pid_dir=/var/tmp/ceph/pid 00:34:02.484 + ceph_conf=/var/tmp/ceph/ceph.conf 00:34:02.484 + mnt_dir=/var/tmp/ceph/mnt 00:34:02.484 + image=/var/tmp/ceph_raw.img 00:34:02.484 + dev=/dev/loop200 00:34:02.484 + modprobe loop 00:34:02.484 + umount /dev/loop200p2 00:34:02.484 umount: /dev/loop200p2: no mount point specified. 00:34:02.484 + true 00:34:02.484 + losetup -d /dev/loop200 00:34:02.484 losetup: /dev/loop200: detach failed: No such device or address 00:34:02.484 + true 00:34:02.484 + '[' -d /var/tmp/ceph ']' 00:34:02.484 + mkdir /var/tmp/ceph 00:34:02.484 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:34:02.484 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:34:02.484 + fallocate -l 4G /var/tmp/ceph_raw.img 00:34:02.484 + mknod /dev/loop200 b 7 200 00:34:02.484 mknod: /dev/loop200: File exists 00:34:02.484 + true 00:34:02.484 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:34:02.484 Partitioning /dev/loop200 00:34:02.484 + PARTED='parted -s' 00:34:02.484 + SGDISK=sgdisk 00:34:02.484 + echo 'Partitioning /dev/loop200' 00:34:02.484 + parted -s /dev/loop200 mktable gpt 00:34:02.484 + sleep 2 00:34:04.389 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:34:04.389 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:34:04.389 Setting name on /dev/loop200 00:34:04.389 + partno=0 00:34:04.389 + echo 'Setting name on /dev/loop200' 00:34:04.389 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:34:05.810 Warning: The kernel is still using the old partition table. 00:34:05.810 The new table will be used at the next reboot or after you 00:34:05.810 run partprobe(8) or kpartx(8) 00:34:05.810 The operation has completed successfully. 00:34:05.810 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:34:06.748 Warning: The kernel is still using the old partition table. 00:34:06.748 The new table will be used at the next reboot or after you 00:34:06.748 run partprobe(8) or kpartx(8) 00:34:06.748 The operation has completed successfully. 
00:34:06.748 + kpartx /dev/loop200 00:34:06.748 loop200p1 : 0 4192256 /dev/loop200 2048 00:34:06.748 loop200p2 : 0 4192256 /dev/loop200 4194304 00:34:06.748 ++ ceph -v 00:34:06.748 ++ awk '{print $3}' 00:34:06.748 + ceph_version=17.2.7 00:34:06.748 + ceph_maj=17 00:34:06.748 + '[' 17 -gt 12 ']' 00:34:06.748 + update_config=true 00:34:06.748 + rm -f /var/log/ceph/ceph-mon.a.log 00:34:06.748 + set_min_mon_release='--set-min-mon-release 14' 00:34:06.748 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:34:06.748 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:34:06.748 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:34:06.748 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:34:06.748 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:34:06.748 = sectsz=512 attr=2, projid32bit=1 00:34:06.748 = crc=1 finobt=1, sparse=1, rmapbt=0 00:34:06.748 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:34:06.748 data = bsize=4096 blocks=524032, imaxpct=25 00:34:06.748 = sunit=0 swidth=0 blks 00:34:06.748 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:34:06.748 log =internal log bsize=4096 blocks=16384, version=2 00:34:06.748 = sectsz=512 sunit=0 blks, lazy-count=1 00:34:06.748 realtime =none extsz=4096 blocks=0, rtextents=0 00:34:06.748 Discarding blocks...Done. 00:34:06.748 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:34:06.748 + cat 00:34:06.748 + rm -rf '/var/tmp/ceph/mon.a/*' 00:34:06.748 + mkdir -p /var/tmp/ceph/mon.a 00:34:06.748 + mkdir -p /var/tmp/ceph/pid 00:34:06.748 + rm -f /etc/ceph/ceph.client.admin.keyring 00:34:06.748 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:34:07.007 creating /var/tmp/ceph/keyring 00:34:07.007 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:34:07.007 + monmaptool --create --clobber --add a 127.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:34:07.007 monmaptool: monmap file /var/tmp/ceph/monmap 00:34:07.007 monmaptool: generated fsid 6a09d2d4-394e-46ab-9672-4943d0be8be1 00:34:07.007 setting min_mon_release = octopus 00:34:07.007 epoch 0 00:34:07.007 fsid 6a09d2d4-394e-46ab-9672-4943d0be8be1 00:34:07.007 last_changed 2024-07-25T09:16:13.945794+0000 00:34:07.007 created 2024-07-25T09:16:13.945794+0000 00:34:07.007 min_mon_release 15 (octopus) 00:34:07.007 election_strategy: 1 00:34:07.007 0: v2:127.0.0.1:12046/0 mon.a 00:34:07.007 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:34:07.007 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:34:07.007 + '[' true = true ']' 00:34:07.007 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:34:07.007 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:34:07.007 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:34:07.007 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:34:07.007 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:34:07.007 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:34:07.007 ++ hostname 00:34:07.007 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:34:07.266 + true 00:34:07.266 + '[' true = true ']' 00:34:07.266 + ceph-conf --name mon.a --show-config-value log_file 00:34:07.266 
/var/log/ceph/ceph-mon.a.log 00:34:07.266 ++ awk '{print $2}' 00:34:07.266 ++ ceph -s 00:34:07.266 ++ grep id 00:34:07.525 + fsid=6a09d2d4-394e-46ab-9672-4943d0be8be1 00:34:07.525 + sed -i 's/perf = true/perf = true\n\tfsid = 6a09d2d4-394e-46ab-9672-4943d0be8be1 \n/g' /var/tmp/ceph/ceph.conf 00:34:07.525 + (( ceph_maj < 18 )) 00:34:07.525 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:34:07.525 + cat /var/tmp/ceph/ceph.conf 00:34:07.525 [global] 00:34:07.525 debug_lockdep = 0/0 00:34:07.525 debug_context = 0/0 00:34:07.525 debug_crush = 0/0 00:34:07.525 debug_buffer = 0/0 00:34:07.525 debug_timer = 0/0 00:34:07.525 debug_filer = 0/0 00:34:07.525 debug_objecter = 0/0 00:34:07.525 debug_rados = 0/0 00:34:07.525 debug_rbd = 0/0 00:34:07.525 debug_ms = 0/0 00:34:07.525 debug_monc = 0/0 00:34:07.525 debug_tp = 0/0 00:34:07.525 debug_auth = 0/0 00:34:07.525 debug_finisher = 0/0 00:34:07.525 debug_heartbeatmap = 0/0 00:34:07.525 debug_perfcounter = 0/0 00:34:07.525 debug_asok = 0/0 00:34:07.525 debug_throttle = 0/0 00:34:07.525 debug_mon = 0/0 00:34:07.525 debug_paxos = 0/0 00:34:07.525 debug_rgw = 0/0 00:34:07.525 00:34:07.525 perf = true 00:34:07.525 osd objectstore = filestore 00:34:07.525 00:34:07.525 fsid = 6a09d2d4-394e-46ab-9672-4943d0be8be1 00:34:07.525 00:34:07.525 mutex_perf_counter = false 00:34:07.525 throttler_perf_counter = false 00:34:07.525 rbd cache = false 00:34:07.525 mon_allow_pool_delete = true 00:34:07.525 00:34:07.525 osd_pool_default_size = 1 00:34:07.525 00:34:07.525 [mon] 00:34:07.525 mon_max_pool_pg_num=166496 00:34:07.525 mon_osd_max_split_count = 10000 00:34:07.525 mon_pg_warn_max_per_osd = 10000 00:34:07.525 00:34:07.525 [osd] 00:34:07.525 osd_op_threads = 64 00:34:07.525 filestore_queue_max_ops=5000 00:34:07.525 filestore_queue_committing_max_ops=5000 00:34:07.525 journal_max_write_entries=1000 00:34:07.525 journal_queue_max_ops=3000 00:34:07.525 objecter_inflight_ops=102400 00:34:07.525 
filestore_wbthrottle_enable=false 00:34:07.525 filestore_queue_max_bytes=1048576000 00:34:07.525 filestore_queue_committing_max_bytes=1048576000 00:34:07.525 journal_max_write_bytes=1048576000 00:34:07.525 journal_queue_max_bytes=1048576000 00:34:07.525 ms_dispatch_throttle_bytes=1048576000 00:34:07.525 objecter_inflight_op_bytes=1048576000 00:34:07.525 filestore_max_sync_interval=10 00:34:07.525 osd_client_message_size_cap = 0 00:34:07.525 osd_client_message_cap = 0 00:34:07.525 osd_enable_op_tracker = false 00:34:07.525 filestore_fd_cache_size = 10240 00:34:07.525 filestore_fd_cache_shards = 64 00:34:07.525 filestore_op_threads = 16 00:34:07.525 osd_op_num_shards = 48 00:34:07.525 osd_op_num_threads_per_shard = 2 00:34:07.525 osd_pg_object_context_cache_count = 10240 00:34:07.525 filestore_odsync_write = True 00:34:07.525 journal_dynamic_throttle = True 00:34:07.525 00:34:07.525 [osd.0] 00:34:07.525 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:34:07.525 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:34:07.525 00:34:07.525 # add mon address 00:34:07.525 [mon.a] 00:34:07.525 mon addr = v2:127.0.0.1:12046 00:34:07.525 + i=0 00:34:07.525 + mkdir -p /var/tmp/ceph/mnt 00:34:07.525 ++ uuidgen 00:34:07.525 + uuid=c016a668-12a9-4f07-941a-70354410821b 00:34:07.525 + ceph -c /var/tmp/ceph/ceph.conf osd create c016a668-12a9-4f07-941a-70354410821b 0 00:34:07.786 0 00:34:07.786 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid c016a668-12a9-4f07-941a-70354410821b --check-needs-journal --no-mon-config 00:34:07.786 2024-07-25T09:16:14.737+0000 7f115a638400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:34:07.786 2024-07-25T09:16:14.737+0000 7f115a638400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:34:07.786 2024-07-25T09:16:14.776+0000 7f115a638400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected c016a668-12a9-4f07-941a-70354410821b, invalid (someone else's?) journal 00:34:07.786 2024-07-25T09:16:14.796+0000 7f115a638400 -1 journal do_read_entry(4096): bad header magic 00:34:07.786 2024-07-25T09:16:14.796+0000 7f115a638400 -1 journal do_read_entry(4096): bad header magic 00:34:07.786 ++ hostname 00:34:07.786 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:34:08.723 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:34:08.723 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:34:08.981 added key for osd.0 00:34:08.981 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:34:09.239 + class_dir=/lib64/rados-classes 00:34:09.239 + [[ -e /lib64/rados-classes ]] 00:34:09.239 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:34:09.499 + pkill -9 ceph-osd 00:34:09.499 + true 00:34:09.499 + sleep 2 00:34:12.033 + mkdir -p /var/tmp/ceph/pid 00:34:12.033 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:34:12.033 2024-07-25T09:16:18.666+0000 7fb8f6aab400 -1 Falling back to public interface 00:34:12.033 2024-07-25T09:16:18.704+0000 7fb8f6aab400 -1 journal do_read_entry(8192): bad header magic 00:34:12.033 2024-07-25T09:16:18.704+0000 7fb8f6aab400 -1 journal do_read_entry(8192): bad header magic 00:34:12.033 2024-07-25T09:16:18.712+0000 7fb8f6aab400 -1 osd.0 0 log_to_monitors true 00:34:12.033 09:16:18 spdkcli_rbd -- common/autotest_common.sh@1027 -- # ceph osd pool create rbd 128 00:34:12.969 pool 'rbd' created 00:34:12.969 09:16:19 spdkcli_rbd -- common/autotest_common.sh@1028 -- # rbd create foo --size 1000 
00:34:16.255 09:16:22 spdkcli_rbd -- spdkcli/rbd.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py '"/bdevs/rbd create rbd foo 512'\'' '\''Ceph0'\'' True "/bdevs/rbd' create rbd foo 512 Ceph1 'True 00:34:16.255 timing_exit spdkcli_create_rbd_config 00:34:16.255 00:34:16.255 timing_enter spdkcli_check_match 00:34:16.255 check_match 00:34:16.255 timing_exit spdkcli_check_match 00:34:16.255 00:34:16.255 timing_enter spdkcli_clear_rbd_config 00:34:16.255 /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py "/bdevs/rbd' delete Ceph0 Ceph0 '"/bdevs/rbd delete_all'\'' '\''Ceph1'\'' ' 00:34:16.513 Executing command: [' ', True] 00:34:16.513 09:16:23 spdkcli_rbd -- spdkcli/rbd.sh@31 -- # rbd_cleanup 00:34:16.513 09:16:23 spdkcli_rbd -- common/autotest_common.sh@1033 -- # hash ceph 00:34:16.513 09:16:23 spdkcli_rbd -- common/autotest_common.sh@1034 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:34:16.513 + base_dir=/var/tmp/ceph 00:34:16.513 + image=/var/tmp/ceph/ceph_raw.img 00:34:16.513 + dev=/dev/loop200 00:34:16.513 + pkill -9 ceph 00:34:16.772 + sleep 3 00:34:20.065 + umount /dev/loop200p2 00:34:20.065 + losetup -d /dev/loop200 00:34:20.065 + rm -rf /var/tmp/ceph 00:34:20.065 09:16:26 spdkcli_rbd -- common/autotest_common.sh@1035 -- # rm -f /var/tmp/ceph_raw.img 00:34:20.065 09:16:26 spdkcli_rbd -- spdkcli/rbd.sh@32 -- # timing_exit spdkcli_clear_rbd_config 00:34:20.065 09:16:26 spdkcli_rbd -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:20.065 09:16:26 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:34:20.065 09:16:26 spdkcli_rbd -- spdkcli/rbd.sh@34 -- # killprocess 126588 00:34:20.065 09:16:26 spdkcli_rbd -- common/autotest_common.sh@950 -- # '[' -z 126588 ']' 00:34:20.065 09:16:26 spdkcli_rbd -- common/autotest_common.sh@954 -- # kill -0 126588 00:34:20.065 09:16:26 spdkcli_rbd -- common/autotest_common.sh@955 -- # uname 00:34:20.065 09:16:26 spdkcli_rbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:34:20.065 09:16:26 spdkcli_rbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 126588 00:34:20.065 09:16:26 spdkcli_rbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:20.065 09:16:26 spdkcli_rbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:20.065 09:16:26 spdkcli_rbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 126588' 00:34:20.065 killing process with pid 126588 00:34:20.065 09:16:26 spdkcli_rbd -- common/autotest_common.sh@969 -- # kill 126588 00:34:20.065 09:16:26 spdkcli_rbd -- common/autotest_common.sh@974 -- # wait 126588 00:34:23.352 09:16:29 spdkcli_rbd -- spdkcli/rbd.sh@1 -- # rbd_cleanup 00:34:23.352 09:16:29 spdkcli_rbd -- common/autotest_common.sh@1033 -- # hash ceph 00:34:23.352 09:16:29 spdkcli_rbd -- common/autotest_common.sh@1034 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:34:23.352 + base_dir=/var/tmp/ceph 00:34:23.352 + image=/var/tmp/ceph/ceph_raw.img 00:34:23.352 + dev=/dev/loop200 00:34:23.352 + pkill -9 ceph 00:34:23.352 + sleep 3 00:34:25.916 + umount /dev/loop200p2 00:34:25.916 umount: /dev/loop200p2: no mount point specified. 
00:34:25.916 + losetup -d /dev/loop200 00:34:25.916 losetup: /dev/loop200: detach failed: No such device or address 00:34:25.916 + rm -rf /var/tmp/ceph 00:34:25.916 09:16:32 spdkcli_rbd -- common/autotest_common.sh@1035 -- # rm -f /var/tmp/ceph_raw.img 00:34:25.916 09:16:32 spdkcli_rbd -- spdkcli/rbd.sh@1 -- # cleanup 00:34:25.916 09:16:32 spdkcli_rbd -- spdkcli/common.sh@10 -- # '[' -n 126588 ']' 00:34:25.916 09:16:32 spdkcli_rbd -- spdkcli/common.sh@11 -- # killprocess 126588 00:34:25.916 Process with pid 126588 is not found 00:34:25.916 09:16:32 spdkcli_rbd -- common/autotest_common.sh@950 -- # '[' -z 126588 ']' 00:34:25.916 09:16:32 spdkcli_rbd -- common/autotest_common.sh@954 -- # kill -0 126588 00:34:25.916 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (126588) - No such process 00:34:25.916 09:16:32 spdkcli_rbd -- common/autotest_common.sh@977 -- # echo 'Process with pid 126588 is not found' 00:34:25.916 09:16:32 spdkcli_rbd -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:34:25.916 09:16:32 spdkcli_rbd -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:25.916 09:16:32 spdkcli_rbd -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:25.916 09:16:32 spdkcli_rbd -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_rbd.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:25.916 00:34:25.916 real 0m31.488s 00:34:25.916 user 0m57.500s 00:34:25.916 sys 0m1.518s 00:34:25.916 09:16:32 spdkcli_rbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:25.916 09:16:32 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:34:25.916 ************************************ 00:34:25.916 END TEST spdkcli_rbd 00:34:25.916 ************************************ 00:34:25.916 09:16:32 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:34:25.916 09:16:32 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:34:25.916 09:16:32 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
00:34:25.916 09:16:32 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:34:25.916 09:16:32 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:34:25.916 09:16:32 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:34:25.916 09:16:32 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:34:25.916 09:16:32 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:34:25.916 09:16:32 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:34:25.916 09:16:32 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:34:25.916 09:16:32 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:34:25.916 09:16:32 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:34:25.916 09:16:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:25.916 09:16:32 -- common/autotest_common.sh@10 -- # set +x 00:34:25.916 09:16:32 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:34:25.916 09:16:32 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:34:25.916 09:16:32 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:34:25.916 09:16:32 -- common/autotest_common.sh@10 -- # set +x 00:34:27.823 INFO: APP EXITING 00:34:27.823 INFO: killing all VMs 00:34:27.823 INFO: killing vhost app 00:34:27.823 INFO: EXIT DONE 00:34:28.391 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:28.391 Waiting for block devices as requested 00:34:28.391 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:28.391 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:29.329 0000:00:10.0 (1b36 0010): Active devices: data@nvme1n1, so not binding PCI dev 00:34:29.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:29.329 Cleaning 00:34:29.329 Removing: /var/run/dpdk/spdk0/config 00:34:29.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:29.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:29.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:29.329 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:29.329 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:29.329 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:29.329 Removing: /var/run/dpdk/spdk1/config 00:34:29.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:29.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:29.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:29.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:29.329 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:29.329 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:29.329 Removing: /dev/shm/iscsi_trace.pid77563 00:34:29.329 Removing: /dev/shm/spdk_tgt_trace.pid59195 00:34:29.329 Removing: /var/run/dpdk/spdk0 00:34:29.330 Removing: /var/run/dpdk/spdk1 00:34:29.330 Removing: /var/run/dpdk/spdk_pid122390 00:34:29.330 Removing: /var/run/dpdk/spdk_pid122719 00:34:29.330 Removing: /var/run/dpdk/spdk_pid122769 00:34:29.330 Removing: /var/run/dpdk/spdk_pid122866 00:34:29.330 Removing: /var/run/dpdk/spdk_pid122946 00:34:29.589 Removing: /var/run/dpdk/spdk_pid123032 00:34:29.589 Removing: /var/run/dpdk/spdk_pid123238 00:34:29.589 Removing: /var/run/dpdk/spdk_pid123289 00:34:29.589 Removing: /var/run/dpdk/spdk_pid123328 00:34:29.589 Removing: /var/run/dpdk/spdk_pid123366 00:34:29.589 Removing: /var/run/dpdk/spdk_pid123404 00:34:29.589 Removing: /var/run/dpdk/spdk_pid123524 00:34:29.589 Removing: /var/run/dpdk/spdk_pid123567 00:34:29.589 Removing: /var/run/dpdk/spdk_pid123821 00:34:29.589 Removing: /var/run/dpdk/spdk_pid124153 00:34:29.589 Removing: /var/run/dpdk/spdk_pid124427 00:34:29.589 Removing: /var/run/dpdk/spdk_pid125334 00:34:29.589 Removing: /var/run/dpdk/spdk_pid125396 00:34:29.589 Removing: /var/run/dpdk/spdk_pid125702 00:34:29.589 Removing: /var/run/dpdk/spdk_pid125899 00:34:29.589 Removing: /var/run/dpdk/spdk_pid126085 00:34:29.589 Removing: /var/run/dpdk/spdk_pid126232 00:34:29.589 Removing: /var/run/dpdk/spdk_pid126350 
00:34:29.589 Removing: /var/run/dpdk/spdk_pid126428 00:34:29.589 Removing: /var/run/dpdk/spdk_pid126459 00:34:29.589 Removing: /var/run/dpdk/spdk_pid126588 00:34:29.589 Removing: /var/run/dpdk/spdk_pid58957 00:34:29.589 Removing: /var/run/dpdk/spdk_pid59195 00:34:29.589 Removing: /var/run/dpdk/spdk_pid59422 00:34:29.589 Removing: /var/run/dpdk/spdk_pid59526 00:34:29.589 Removing: /var/run/dpdk/spdk_pid59586 00:34:29.589 Removing: /var/run/dpdk/spdk_pid59721 00:34:29.589 Removing: /var/run/dpdk/spdk_pid59745 00:34:29.589 Removing: /var/run/dpdk/spdk_pid59899 00:34:29.589 Removing: /var/run/dpdk/spdk_pid60100 00:34:29.589 Removing: /var/run/dpdk/spdk_pid60299 00:34:29.589 Removing: /var/run/dpdk/spdk_pid60402 00:34:29.589 Removing: /var/run/dpdk/spdk_pid60511 00:34:29.589 Removing: /var/run/dpdk/spdk_pid60625 00:34:29.589 Removing: /var/run/dpdk/spdk_pid60725 00:34:29.589 Removing: /var/run/dpdk/spdk_pid60770 00:34:29.589 Removing: /var/run/dpdk/spdk_pid60812 00:34:29.589 Removing: /var/run/dpdk/spdk_pid60880 00:34:29.589 Removing: /var/run/dpdk/spdk_pid60992 00:34:29.589 Removing: /var/run/dpdk/spdk_pid61435 00:34:29.589 Removing: /var/run/dpdk/spdk_pid61523 00:34:29.589 Removing: /var/run/dpdk/spdk_pid61603 00:34:29.589 Removing: /var/run/dpdk/spdk_pid61624 00:34:29.589 Removing: /var/run/dpdk/spdk_pid61789 00:34:29.589 Removing: /var/run/dpdk/spdk_pid61805 00:34:29.589 Removing: /var/run/dpdk/spdk_pid61975 00:34:29.589 Removing: /var/run/dpdk/spdk_pid61997 00:34:29.589 Removing: /var/run/dpdk/spdk_pid62072 00:34:29.589 Removing: /var/run/dpdk/spdk_pid62101 00:34:29.589 Removing: /var/run/dpdk/spdk_pid62165 00:34:29.589 Removing: /var/run/dpdk/spdk_pid62194 00:34:29.589 Removing: /var/run/dpdk/spdk_pid62397 00:34:29.589 Removing: /var/run/dpdk/spdk_pid62440 00:34:29.589 Removing: /var/run/dpdk/spdk_pid62521 00:34:29.590 Removing: /var/run/dpdk/spdk_pid62876 00:34:29.590 Removing: /var/run/dpdk/spdk_pid62907 00:34:29.590 Removing: /var/run/dpdk/spdk_pid62938 
00:34:29.590 Removing: /var/run/dpdk/spdk_pid62988 00:34:29.590 Removing: /var/run/dpdk/spdk_pid62993 00:34:29.590 Removing: /var/run/dpdk/spdk_pid63022 00:34:29.590 Removing: /var/run/dpdk/spdk_pid63051 00:34:29.590 Removing: /var/run/dpdk/spdk_pid63067 00:34:29.590 Removing: /var/run/dpdk/spdk_pid63123 00:34:29.590 Removing: /var/run/dpdk/spdk_pid63144 00:34:29.849 Removing: /var/run/dpdk/spdk_pid63213 00:34:29.849 Removing: /var/run/dpdk/spdk_pid63306 00:34:29.849 Removing: /var/run/dpdk/spdk_pid64090 00:34:29.849 Removing: /var/run/dpdk/spdk_pid65903 00:34:29.849 Removing: /var/run/dpdk/spdk_pid66208 00:34:29.849 Removing: /var/run/dpdk/spdk_pid66546 00:34:29.849 Removing: /var/run/dpdk/spdk_pid66822 00:34:29.849 Removing: /var/run/dpdk/spdk_pid67467 00:34:29.849 Removing: /var/run/dpdk/spdk_pid72301 00:34:29.849 Removing: /var/run/dpdk/spdk_pid76382 00:34:29.849 Removing: /var/run/dpdk/spdk_pid77177 00:34:29.849 Removing: /var/run/dpdk/spdk_pid77221 00:34:29.849 Removing: /var/run/dpdk/spdk_pid77563 00:34:29.849 Removing: /var/run/dpdk/spdk_pid78950 00:34:29.849 Removing: /var/run/dpdk/spdk_pid79366 00:34:29.849 Removing: /var/run/dpdk/spdk_pid79431 00:34:29.849 Removing: /var/run/dpdk/spdk_pid79845 00:34:29.849 Removing: /var/run/dpdk/spdk_pid82256 00:34:29.849 Clean 00:34:29.849 09:16:36 -- common/autotest_common.sh@1451 -- # return 0 00:34:29.849 09:16:36 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:34:29.849 09:16:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:29.849 09:16:36 -- common/autotest_common.sh@10 -- # set +x 00:34:29.849 09:16:36 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:34:29.849 09:16:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:29.849 09:16:36 -- common/autotest_common.sh@10 -- # set +x 00:34:29.849 09:16:36 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:29.849 09:16:36 -- spdk/autotest.sh@393 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:29.849 09:16:36 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:29.849 09:16:36 -- spdk/autotest.sh@395 -- # hash lcov 00:34:29.849 09:16:36 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:29.849 09:16:36 -- spdk/autotest.sh@397 -- # hostname 00:34:30.165 09:16:36 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:30.165 geninfo: WARNING: invalid characters removed from testname! 00:34:56.801 09:16:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:56.801 09:17:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:58.182 09:17:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:00.089 09:17:07 -- 
spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:02.629 09:17:09 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:04.584 09:17:11 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:07.118 09:17:13 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:07.118 09:17:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:07.118 09:17:13 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:07.118 09:17:13 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.118 09:17:13 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.118 09:17:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:35:07.118 09:17:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.118 09:17:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.118 09:17:13 -- paths/export.sh@5 -- $ export PATH 00:35:07.118 09:17:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.118 09:17:13 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:35:07.118 09:17:13 -- common/autobuild_common.sh@447 -- $ date +%s 00:35:07.118 09:17:13 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721899033.XXXXXX 00:35:07.118 09:17:13 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721899033.VZyZcO 00:35:07.118 09:17:13 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:35:07.118 09:17:13 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:35:07.118 09:17:13 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 
00:35:07.118 09:17:13 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:35:07.118 09:17:13 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:35:07.118 09:17:13 -- common/autobuild_common.sh@463 -- $ get_config_params 00:35:07.118 09:17:13 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:35:07.118 09:17:13 -- common/autotest_common.sh@10 -- $ set +x 00:35:07.118 09:17:13 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:35:07.118 09:17:13 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:35:07.118 09:17:13 -- pm/common@17 -- $ local monitor 00:35:07.118 09:17:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:07.118 09:17:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:07.118 09:17:13 -- pm/common@25 -- $ sleep 1 00:35:07.118 09:17:13 -- pm/common@21 -- $ date +%s 00:35:07.118 09:17:13 -- pm/common@21 -- $ date +%s 00:35:07.118 09:17:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721899033 00:35:07.118 09:17:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721899033 00:35:07.118 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721899033_collect-vmstat.pm.log 00:35:07.118 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721899033_collect-cpu-load.pm.log 00:35:08.054 09:17:14 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:35:08.054 09:17:14 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:35:08.054 09:17:14 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:35:08.054 09:17:14 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:35:08.054 09:17:14 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:35:08.054 09:17:14 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:35:08.054 09:17:14 -- spdk/autopackage.sh@19 -- $ timing_finish 00:35:08.054 09:17:14 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:08.054 09:17:14 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:35:08.054 09:17:14 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:08.054 09:17:14 -- spdk/autopackage.sh@20 -- $ exit 0 00:35:08.054 09:17:14 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:35:08.054 09:17:14 -- pm/common@29 -- $ signal_monitor_resources TERM 00:35:08.054 09:17:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:35:08.054 09:17:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:08.054 09:17:14 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:35:08.054 09:17:14 -- pm/common@44 -- $ pid=129078 00:35:08.054 09:17:14 -- pm/common@50 -- $ kill -TERM 129078 00:35:08.054 09:17:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:08.054 09:17:14 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:35:08.054 09:17:14 -- pm/common@44 -- $ pid=129080 00:35:08.054 09:17:14 -- pm/common@50 -- $ kill -TERM 129080 00:35:08.054 + [[ -n 5329 ]] 00:35:08.054 + 
sudo kill 5329 00:35:08.066 [Pipeline] } 00:35:08.088 [Pipeline] // timeout 00:35:08.094 [Pipeline] } 00:35:08.115 [Pipeline] // stage 00:35:08.121 [Pipeline] } 00:35:08.141 [Pipeline] // catchError 00:35:08.153 [Pipeline] stage 00:35:08.155 [Pipeline] { (Stop VM) 00:35:08.170 [Pipeline] sh 00:35:08.451 + vagrant halt 00:35:11.735 ==> default: Halting domain... 00:35:18.320 [Pipeline] sh 00:35:18.605 + vagrant destroy -f 00:35:21.884 ==> default: Removing domain... 00:35:21.909 [Pipeline] sh 00:35:22.249 + mv output /var/jenkins/workspace/iscsi-vg-autotest/output 00:35:22.258 [Pipeline] } 00:35:22.278 [Pipeline] // stage 00:35:22.285 [Pipeline] } 00:35:22.304 [Pipeline] // dir 00:35:22.310 [Pipeline] } 00:35:22.327 [Pipeline] // wrap 00:35:22.334 [Pipeline] } 00:35:22.350 [Pipeline] // catchError 00:35:22.359 [Pipeline] stage 00:35:22.362 [Pipeline] { (Epilogue) 00:35:22.376 [Pipeline] sh 00:35:22.657 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:30.784 [Pipeline] catchError 00:35:30.786 [Pipeline] { 00:35:30.801 [Pipeline] sh 00:35:31.085 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:31.085 Artifacts sizes are good 00:35:31.095 [Pipeline] } 00:35:31.113 [Pipeline] // catchError 00:35:31.126 [Pipeline] archiveArtifacts 00:35:31.134 Archiving artifacts 00:35:32.608 [Pipeline] cleanWs 00:35:32.620 [WS-CLEANUP] Deleting project workspace... 00:35:32.620 [WS-CLEANUP] Deferred wipeout is used... 00:35:32.627 [WS-CLEANUP] done 00:35:32.629 [Pipeline] } 00:35:32.648 [Pipeline] // stage 00:35:32.654 [Pipeline] } 00:35:32.672 [Pipeline] // node 00:35:32.679 [Pipeline] End of Pipeline 00:35:32.716 Finished: SUCCESS